00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1000 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3667 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.063 The recommended git tool is: git 00:00:00.064 using credential 00000000-0000-0000-0000-000000000002 00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.082 Fetching changes from the remote Git repository 00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.167 > git --version # 'git version 2.39.2' 00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.188 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.124 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.135 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.146 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.146 > git config core.sparsecheckout # timeout=10 00:00:04.157 > git read-tree -mu HEAD # timeout=10 00:00:04.173 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.201 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.201 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.288 [Pipeline] Start of Pipeline 00:00:04.305 [Pipeline] library 00:00:04.307 Loading library shm_lib@master 00:00:04.307 Library shm_lib@master is cached. Copying from home. 00:00:04.321 [Pipeline] node 00:00:04.331 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.333 [Pipeline] { 00:00:04.342 [Pipeline] catchError 00:00:04.344 [Pipeline] { 00:00:04.357 [Pipeline] wrap 00:00:04.365 [Pipeline] { 00:00:04.373 [Pipeline] stage 00:00:04.375 [Pipeline] { (Prologue) 00:00:04.392 [Pipeline] echo 00:00:04.394 Node: VM-host-SM38 00:00:04.400 [Pipeline] cleanWs 00:00:04.412 [WS-CLEANUP] Deleting project workspace... 00:00:04.412 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.421 [WS-CLEANUP] done 00:00:04.597 [Pipeline] setCustomBuildProperty 00:00:04.716 [Pipeline] httpRequest 00:00:05.417 [Pipeline] echo 00:00:05.419 Sorcerer 10.211.164.101 is alive 00:00:05.428 [Pipeline] retry 00:00:05.429 [Pipeline] { 00:00:05.438 [Pipeline] httpRequest 00:00:05.443 HttpMethod: GET 00:00:05.444 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.444 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.445 Response Code: HTTP/1.1 200 OK 00:00:05.446 Success: Status code 200 is in the accepted range: 200,404 00:00:05.446 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.089 [Pipeline] } 00:00:06.101 [Pipeline] // retry 00:00:06.107 [Pipeline] sh 00:00:06.392 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.409 [Pipeline] httpRequest 00:00:08.146 [Pipeline] echo 00:00:08.148 Sorcerer 10.211.164.101 is alive 00:00:08.160 [Pipeline] retry 00:00:08.162 [Pipeline] { 00:00:08.180 [Pipeline] httpRequest 00:00:08.186 HttpMethod: GET 00:00:08.187 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.187 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.193 Response Code: HTTP/1.1 200 OK 00:00:08.194 Success: Status code 200 is in the accepted range: 200,404 00:00:08.195 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:26.921 [Pipeline] } 00:00:26.939 [Pipeline] // retry 00:00:26.948 [Pipeline] sh 00:00:27.238 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:30.565 [Pipeline] sh 00:00:30.852 + git -C spdk log --oneline -n5 00:00:30.852 c13c99a5e test: Various fixes for Fedora40 00:00:30.852 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:30.852 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:30.852 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:30.852 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:30.876 [Pipeline] withCredentials 00:00:30.889 > git --version # timeout=10 00:00:30.901 > git --version # 'git version 2.39.2' 00:00:30.922 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:30.925 [Pipeline] { 00:00:30.935 [Pipeline] retry 00:00:30.937 [Pipeline] { 00:00:30.955 [Pipeline] sh 00:00:31.246 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:31.260 [Pipeline] } 00:00:31.279 [Pipeline] // retry 00:00:31.286 [Pipeline] } 00:00:31.304 [Pipeline] // withCredentials 00:00:31.316 [Pipeline] httpRequest 00:00:31.736 [Pipeline] echo 00:00:31.738 Sorcerer 10.211.164.101 is alive 00:00:31.749 [Pipeline] retry 00:00:31.751 [Pipeline] { 00:00:31.765 [Pipeline] httpRequest 00:00:31.770 HttpMethod: GET 00:00:31.771 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:31.771 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:31.787 Response Code: HTTP/1.1 200 OK 00:00:31.788 Success: Status code 200 is in the accepted range: 200,404 00:00:31.788 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:07.388 [Pipeline] } 00:01:07.408 [Pipeline] // retry 
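For reference, the retry/httpRequest/sh sequence above amounts to fetching a pre-packaged repo snapshot from the internal mirror and unpacking it into the workspace. A minimal sketch only: the pipeline uses the Jenkins httpRequest step rather than curl, so curl here is a stand-in; the mirror address, tarball name, and tar flags are copied verbatim from the log.

```bash
#!/usr/bin/env bash
# Sketch of the package-staging steps logged above. Assumption: curl
# stands in for the Jenkins httpRequest step; paths/names are from the log.
set -euo pipefail

MIRROR=http://10.211.164.101/packages
PKG=spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
WORKSPACE=/var/jenkins/workspace/nvme-vg-autotest

# Fetch the snapshot and save it into the workspace, as httpRequest does.
curl -fSs -o "$WORKSPACE/$PKG" "$MIRROR/$PKG"

# Unpack without restoring the archive's recorded owner, matching the
# logged "tar --no-same-owner -xf" call.
tar --no-same-owner -xf "$WORKSPACE/$PKG" -C "$WORKSPACE"
```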
00:01:07.416 [Pipeline] sh 00:01:07.707 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:09.652 [Pipeline] sh 00:01:10.000 + git -C dpdk log --oneline -n5 00:01:10.000 eeb0605f11 version: 23.11.0 00:01:10.000 238778122a doc: update release notes for 23.11 00:01:10.000 46aa6b3cfc doc: fix description of RSS features 00:01:10.000 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:10.000 7e421ae345 devtools: support skipping forbid rule check 00:01:10.022 [Pipeline] writeFile 00:01:10.037 [Pipeline] sh 00:01:10.324 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:10.338 [Pipeline] sh 00:01:10.623 + cat autorun-spdk.conf 00:01:10.623 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.623 SPDK_TEST_NVME=1 00:01:10.623 SPDK_TEST_FTL=1 00:01:10.623 SPDK_TEST_ISAL=1 00:01:10.623 SPDK_RUN_ASAN=1 00:01:10.623 SPDK_RUN_UBSAN=1 00:01:10.623 SPDK_TEST_XNVME=1 00:01:10.623 SPDK_TEST_NVME_FDP=1 00:01:10.623 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:10.623 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:10.623 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.632 RUN_NIGHTLY=1 00:01:10.634 [Pipeline] } 00:01:10.650 [Pipeline] // stage 00:01:10.668 [Pipeline] stage 00:01:10.671 [Pipeline] { (Run VM) 00:01:10.686 [Pipeline] sh 00:01:10.972 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:10.972 + echo 'Start stage prepare_nvme.sh' 00:01:10.972 Start stage prepare_nvme.sh 00:01:10.972 + [[ -n 8 ]] 00:01:10.972 + disk_prefix=ex8 00:01:10.972 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:10.972 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:10.972 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:10.972 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.972 ++ SPDK_TEST_NVME=1 00:01:10.972 ++ SPDK_TEST_FTL=1 00:01:10.972 ++ SPDK_TEST_ISAL=1 00:01:10.972 ++ SPDK_RUN_ASAN=1 00:01:10.972 ++ SPDK_RUN_UBSAN=1 00:01:10.972 ++ SPDK_TEST_XNVME=1 00:01:10.972 ++ SPDK_TEST_NVME_FDP=1 00:01:10.972 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:10.972 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:10.972 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.972 ++ RUN_NIGHTLY=1 00:01:10.972 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:10.972 + nvme_files=() 00:01:10.972 + declare -A nvme_files 00:01:10.972 + backend_dir=/var/lib/libvirt/images/backends 00:01:10.972 + nvme_files['nvme.img']=5G 00:01:10.972 + nvme_files['nvme-cmb.img']=5G 00:01:10.972 + nvme_files['nvme-multi0.img']=4G 00:01:10.972 + nvme_files['nvme-multi1.img']=4G 00:01:10.972 + nvme_files['nvme-multi2.img']=4G 00:01:10.972 + nvme_files['nvme-openstack.img']=8G 00:01:10.972 + nvme_files['nvme-zns.img']=5G 00:01:10.972 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:10.972 + (( SPDK_TEST_FTL == 1 )) 00:01:10.972 + nvme_files["nvme-ftl.img"]=6G 00:01:10.972 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:10.972 + nvme_files["nvme-fdp.img"]=1G 00:01:10.972 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:10.972 + for nvme in "${!nvme_files[@]}" 00:01:10.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:10.972 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.972 + for nvme in "${!nvme_files[@]}" 00:01:10.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:01:10.972 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:10.972 + for nvme in "${!nvme_files[@]}" 00:01:10.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:10.972 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.972 + for nvme in "${!nvme_files[@]}" 00:01:10.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:10.972 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:10.972 + for nvme in "${!nvme_files[@]}" 00:01:10.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:10.972 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.972 + for nvme in "${!nvme_files[@]}" 00:01:10.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:11.233 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.233 + for nvme in "${!nvme_files[@]}" 00:01:11.233 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:11.233 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.233 + for nvme in "${!nvme_files[@]}" 00:01:11.233 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:01:11.233 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:11.233 + for nvme in "${!nvme_files[@]}" 00:01:11.233 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:11.495 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.495 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:11.495 + echo 'End stage prepare_nvme.sh' 00:01:11.495 End stage prepare_nvme.sh 00:01:11.508 [Pipeline] sh 00:01:11.795 + DISTRO=fedora39 00:01:11.795 + CPUS=10 00:01:11.795 + RAM=12288 00:01:11.795 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:11.795 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:11.795 00:01:11.795 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:11.795 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:11.795 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:11.795 HELP=0 00:01:11.795 DRY_RUN=0 00:01:11.795 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:01:11.795 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:11.795 NVME_AUTO_CREATE=0 00:01:11.795 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:01:11.795 NVME_CMB=,,,, 00:01:11.795 NVME_PMR=,,,, 00:01:11.795 NVME_ZNS=,,,, 00:01:11.795 NVME_MS=true,,,, 00:01:11.795 NVME_FDP=,,,on, 00:01:11.795 SPDK_VAGRANT_DISTRO=fedora39 00:01:11.795 SPDK_VAGRANT_VMCPU=10 00:01:11.795 SPDK_VAGRANT_VMRAM=12288 00:01:11.795 SPDK_VAGRANT_PROVIDER=libvirt 00:01:11.795 SPDK_VAGRANT_HTTP_PROXY= 00:01:11.795 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:11.795 SPDK_OPENSTACK_NETWORK=0 00:01:11.795 VAGRANT_PACKAGE_BOX=0 00:01:11.795 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:11.795 FORCE_DISTRO=true 00:01:11.795 VAGRANT_BOX_VERSION= 00:01:11.795 EXTRA_VAGRANTFILES= 00:01:11.795 NIC_MODEL=e1000 00:01:11.795 00:01:11.795 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:11.795 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:14.347 Bringing machine 'default' up with 'libvirt' provider... 00:01:14.609 ==> default: Creating image (snapshot of base box volume). 00:01:14.872 ==> default: Creating domain with the following settings... 
00:01:14.872 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732593436_fe7b69af6728e96b6edb 00:01:14.872 ==> default: -- Domain type: kvm 00:01:14.872 ==> default: -- Cpus: 10 00:01:14.872 ==> default: -- Feature: acpi 00:01:14.872 ==> default: -- Feature: apic 00:01:14.872 ==> default: -- Feature: pae 00:01:14.872 ==> default: -- Memory: 12288M 00:01:14.872 ==> default: -- Memory Backing: hugepages: 00:01:14.872 ==> default: -- Management MAC: 00:01:14.872 ==> default: -- Loader: 00:01:14.872 ==> default: -- Nvram: 00:01:14.872 ==> default: -- Base box: spdk/fedora39 00:01:14.872 ==> default: -- Storage pool: default 00:01:14.872 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732593436_fe7b69af6728e96b6edb.img (20G) 00:01:14.872 ==> default: -- Volume Cache: default 00:01:14.872 ==> default: -- Kernel: 00:01:14.872 ==> default: -- Initrd: 00:01:14.872 ==> default: -- Graphics Type: vnc 00:01:14.872 ==> default: -- Graphics Port: -1 00:01:14.872 ==> default: -- Graphics IP: 127.0.0.1 00:01:14.872 ==> default: -- Graphics Password: Not defined 00:01:14.872 ==> default: -- Video Type: cirrus 00:01:14.872 ==> default: -- Video VRAM: 9216 00:01:14.872 ==> default: -- Sound Type: 00:01:14.872 ==> default: -- Keymap: en-us 00:01:14.872 ==> default: -- TPM Path: 00:01:14.872 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:14.872 ==> default: -- Command line args: 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:14.872 ==> default: -> value=-drive, 00:01:14.872 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:14.872 ==> default: -> value=-drive, 00:01:14.872 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:14.872 ==> default: -> value=-drive, 00:01:14.872 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.872 ==> default: -> value=-drive, 00:01:14.872 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.872 ==> default: -> value=-drive, 00:01:14.872 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:14.872 ==> default: -> value=-drive, 00:01:14.872 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:14.872 ==> default: -> value=-device, 00:01:14.872 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.872 ==> default: Creating shared folders metadata... 00:01:15.135 ==> default: Starting domain. 00:01:17.685 ==> default: Waiting for domain to get an IP address... 00:01:35.826 ==> default: Waiting for SSH to become available... 00:01:35.826 ==> default: Configuring and enabling network interfaces... 00:01:38.369 default: SSH address: 192.168.121.107:22 00:01:38.370 default: SSH username: vagrant 00:01:38.370 default: SSH auth method: private key 00:01:40.280 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:48.425 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:53.721 ==> default: Mounting SSHFS shared folder... 00:01:55.638 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.638 ==> default: Checking Mount.. 00:01:57.024 ==> default: Folder Successfully Mounted! 00:01:57.024 00:01:57.024 SUCCESS! 00:01:57.024 00:01:57.024 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:57.024 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:57.024 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:57.024 00:01:57.036 [Pipeline] } 00:01:57.052 [Pipeline] // stage 00:01:57.061 [Pipeline] dir 00:01:57.062 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:57.064 [Pipeline] { 00:01:57.076 [Pipeline] catchError 00:01:57.078 [Pipeline] { 00:01:57.091 [Pipeline] sh 00:01:57.376 + vagrant ssh-config --host vagrant 00:01:57.376 + sed -ne '/^Host/,$p' 00:01:57.376 + tee ssh_conf 00:02:00.679 Host vagrant 00:02:00.679 HostName 192.168.121.107 00:02:00.679 User vagrant 00:02:00.679 Port 22 00:02:00.679 UserKnownHostsFile /dev/null 00:02:00.679 StrictHostKeyChecking no 00:02:00.679 PasswordAuthentication no 00:02:00.679 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:00.679 IdentitiesOnly yes 00:02:00.679 LogLevel FATAL 00:02:00.679 ForwardAgent yes 00:02:00.679 ForwardX11 yes 00:02:00.679 00:02:00.694 [Pipeline] withEnv 00:02:00.697 [Pipeline] { 00:02:00.711 [Pipeline] sh 00:02:00.997 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:00.997 source /etc/os-release 00:02:00.997 [[ -e /image.version ]] && img=$(< /image.version) 00:02:00.997 # Minimal, systemd-like check. 
00:02:00.997 if [[ -e /.dockerenv ]]; then 00:02:00.997 # Clear garbage from the node'\''s name: 00:02:00.997 # agt-er_autotest_547-896 -> autotest_547-896 00:02:00.997 # $HOSTNAME is the actual container id 00:02:00.997 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:00.997 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:00.997 # We can assume this is a mount from a host where container is running, 00:02:00.997 # so fetch its hostname to easily identify the target swarm worker. 00:02:00.997 container="$(< /etc/hostname) ($agent)" 00:02:00.997 else 00:02:00.997 # Fallback 00:02:00.997 container=$agent 00:02:00.997 fi 00:02:00.997 fi 00:02:00.997 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:00.997 ' 00:02:01.274 [Pipeline] } 00:02:01.290 [Pipeline] // withEnv 00:02:01.298 [Pipeline] setCustomBuildProperty 00:02:01.313 [Pipeline] stage 00:02:01.315 [Pipeline] { (Tests) 00:02:01.332 [Pipeline] sh 00:02:01.614 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:01.892 [Pipeline] sh 00:02:02.178 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.458 [Pipeline] timeout 00:02:02.458 Timeout set to expire in 50 min 00:02:02.460 [Pipeline] { 00:02:02.475 [Pipeline] sh 00:02:02.763 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:03.372 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:03.386 [Pipeline] sh 00:02:03.672 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:03.949 [Pipeline] sh 00:02:04.235 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:04.514 [Pipeline] sh 00:02:04.801 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:05.061 ++ readlink -f spdk_repo 00:02:05.062 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:05.062 + [[ -n /home/vagrant/spdk_repo ]] 00:02:05.062 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:05.062 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:05.062 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:05.062 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:05.062 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:05.062 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:05.062 + cd /home/vagrant/spdk_repo 00:02:05.062 + source /etc/os-release 00:02:05.062 ++ NAME='Fedora Linux' 00:02:05.062 ++ VERSION='39 (Cloud Edition)' 00:02:05.062 ++ ID=fedora 00:02:05.062 ++ VERSION_ID=39 00:02:05.062 ++ VERSION_CODENAME= 00:02:05.062 ++ PLATFORM_ID=platform:f39 00:02:05.062 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:05.062 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:05.062 ++ LOGO=fedora-logo-icon 00:02:05.062 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:05.062 ++ HOME_URL=https://fedoraproject.org/ 00:02:05.062 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:05.062 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:05.062 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:05.062 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:05.062 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:05.062 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:05.062 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:05.062 ++ SUPPORT_END=2024-11-12 00:02:05.062 ++ VARIANT='Cloud Edition' 00:02:05.062 ++ VARIANT_ID=cloud 00:02:05.062 + uname -a 00:02:05.062 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:05.062 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:05.062 Hugepages 00:02:05.062 node hugesize free / total 00:02:05.062 node0 1048576kB 0 / 0 00:02:05.062 node0 2048kB 0 / 0 00:02:05.062 00:02:05.062 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:05.062 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:05.062 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:05.062 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:05.323 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:02:05.323 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:05.323 + rm -f /tmp/spdk-ld-path 00:02:05.323 + source autorun-spdk.conf 00:02:05.323 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.323 ++ SPDK_TEST_NVME=1 00:02:05.323 ++ SPDK_TEST_FTL=1 00:02:05.323 ++ SPDK_TEST_ISAL=1 00:02:05.323 ++ SPDK_RUN_ASAN=1 00:02:05.323 ++ SPDK_RUN_UBSAN=1 00:02:05.323 ++ SPDK_TEST_XNVME=1 00:02:05.323 ++ SPDK_TEST_NVME_FDP=1 00:02:05.323 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.323 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.323 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.323 ++ RUN_NIGHTLY=1 00:02:05.323 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.323 + [[ -n '' ]] 00:02:05.323 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:05.323 + for M in /var/spdk/build-*-manifest.txt 00:02:05.323 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:05.323 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.323 + for M in /var/spdk/build-*-manifest.txt 00:02:05.323 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.323 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.323 + for M in /var/spdk/build-*-manifest.txt 00:02:05.323 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.323 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.323 ++ uname 00:02:05.323 + [[ Linux == \L\i\n\u\x ]] 00:02:05.323 + sudo dmesg -T 00:02:05.323 + sudo dmesg --clear 00:02:05.323 + dmesg_pid=5712 00:02:05.323 + [[ 
Fedora Linux == FreeBSD ]] 00:02:05.323 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.323 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.323 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.323 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.323 + sudo dmesg -Tw 00:02:05.323 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.323 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.323 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.323 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.323 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.323 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.323 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.323 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.323 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.323 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.323 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:05.323 Test configuration: 00:02:05.323 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.323 SPDK_TEST_NVME=1 00:02:05.323 SPDK_TEST_FTL=1 00:02:05.323 SPDK_TEST_ISAL=1 00:02:05.323 SPDK_RUN_ASAN=1 00:02:05.323 SPDK_RUN_UBSAN=1 00:02:05.323 SPDK_TEST_XNVME=1 00:02:05.323 SPDK_TEST_NVME_FDP=1 00:02:05.323 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.323 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.323 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.323 RUN_NIGHTLY=1 03:58:07 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:05.323 03:58:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:05.323 03:58:07 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.324 03:58:07 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.324 03:58:07 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.324 03:58:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.324 03:58:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.324 03:58:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.324 03:58:07 -- paths/export.sh@5 -- $ export PATH 00:02:05.324 03:58:07 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.586 03:58:07 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:05.586 03:58:07 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:05.586 03:58:07 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732593487.XXXXXX 00:02:05.586 03:58:07 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732593487.equFXC 00:02:05.586 03:58:07 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:05.586 03:58:07 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:05.586 03:58:07 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.586 03:58:07 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:05.586 03:58:07 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:05.586 03:58:07 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.586 03:58:07 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:05.586 03:58:07 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:05.586 03:58:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.586 03:58:07 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:05.586 03:58:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:05.586 03:58:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:05.586 03:58:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:05.586 03:58:07 -- spdk/autobuild.sh@16 -- $ date -u 00:02:05.586 Tue Nov 26 03:58:07 AM UTC 2024 00:02:05.586 03:58:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:05.586 LTS-67-gc13c99a5e 00:02:05.586 03:58:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:05.586 03:58:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:05.586 03:58:07 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:05.586 03:58:07 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:05.586 03:58:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.586 ************************************ 00:02:05.586 START TEST asan 00:02:05.586 ************************************ 00:02:05.586 using asan 00:02:05.586 03:58:07 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:02:05.586 00:02:05.586 real 0m0.000s 00:02:05.586 user 0m0.000s 00:02:05.586 sys 0m0.000s 00:02:05.586 ************************************ 00:02:05.586 END TEST asan 00:02:05.586 ************************************ 00:02:05.586 03:58:07 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:05.586 03:58:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.586 03:58:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:05.587 03:58:07 -- 
spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:05.587 03:58:07 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:05.587 03:58:07 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:05.587 03:58:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.587 ************************************ 00:02:05.587 START TEST ubsan 00:02:05.587 ************************************ 00:02:05.587 using ubsan 00:02:05.587 03:58:07 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:05.587 00:02:05.587 real 0m0.000s 00:02:05.587 user 0m0.000s 00:02:05.587 sys 0m0.000s 00:02:05.587 03:58:07 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:05.587 ************************************ 00:02:05.587 END TEST ubsan 00:02:05.587 ************************************ 00:02:05.587 03:58:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.587 03:58:07 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:05.587 03:58:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:05.587 03:58:07 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:05.587 03:58:07 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:05.587 03:58:07 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:05.587 03:58:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.587 ************************************ 00:02:05.587 START TEST build_native_dpdk 00:02:05.587 ************************************ 00:02:05.587 03:58:07 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:05.587 03:58:07 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:05.587 03:58:07 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:05.587 03:58:07 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:05.587 03:58:07 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:05.587 03:58:07 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:05.587 03:58:07 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:05.587 03:58:07 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:05.587 03:58:07 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:05.587 03:58:07 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:05.587 03:58:07 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:05.587 03:58:07 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:05.587 03:58:07 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:05.587 03:58:07 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:05.587 03:58:07 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.587 03:58:07 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:05.587 03:58:07 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:05.587 03:58:07 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:05.587 eeb0605f11 version: 23.11.0 00:02:05.587 238778122a doc: update release notes for 23.11 00:02:05.587 46aa6b3cfc doc: fix description of RSS features 00:02:05.587 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:05.587 7e421ae345 devtools: support skipping forbid rule check 00:02:05.587 03:58:07 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:05.587 03:58:07 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:05.587 03:58:07 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:05.587 03:58:07 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:05.587 03:58:07 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:05.587 03:58:07 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:05.587 03:58:07 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:05.587 03:58:07 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:05.587 03:58:07 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:05.587 03:58:07 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:05.587 03:58:07 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:05.587 03:58:07 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:05.587 03:58:07 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:05.587 03:58:07 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:05.587 03:58:07 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:05.587 03:58:07 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:05.587 03:58:07 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:05.587 03:58:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.587 03:58:07 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:05.587 03:58:07 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:05.587 03:58:07 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:05.587 03:58:07 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:05.587 03:58:07 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:05.587 03:58:07 -- scripts/common.sh@343 -- $ case "$op" in 00:02:05.587 03:58:07 -- scripts/common.sh@344 -- $ : 1 00:02:05.587 03:58:07 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:05.587 03:58:07 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:05.587 03:58:07 -- scripts/common.sh@364 -- $ decimal 23 00:02:05.587 03:58:07 -- scripts/common.sh@352 -- $ local d=23 00:02:05.587 03:58:07 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.587 03:58:07 -- scripts/common.sh@354 -- $ echo 23 00:02:05.587 03:58:07 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:05.587 03:58:07 -- scripts/common.sh@365 -- $ decimal 21 00:02:05.587 03:58:07 -- scripts/common.sh@352 -- $ local d=21 00:02:05.587 03:58:07 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:05.587 03:58:07 -- scripts/common.sh@354 -- $ echo 21 00:02:05.587 03:58:07 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:05.587 03:58:07 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:05.587 03:58:07 -- scripts/common.sh@366 -- $ return 1 00:02:05.587 03:58:07 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:05.587 patching file config/rte_config.h 00:02:05.587 Hunk #1 succeeded at 60 (offset 1 line). 00:02:05.587 03:58:07 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:05.587 03:58:07 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:05.587 03:58:07 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:05.587 03:58:07 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:05.587 03:58:07 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:05.587 03:58:07 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:05.587 03:58:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.587 03:58:07 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:05.587 03:58:07 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:05.587 03:58:07 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:05.587 03:58:07 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:05.587 03:58:07 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:05.587 03:58:07 -- scripts/common.sh@343 -- $ case "$op" in 00:02:05.587 03:58:07 -- scripts/common.sh@344 -- $ : 1 00:02:05.587 03:58:07 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:05.587 03:58:07 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:05.587 03:58:07 -- scripts/common.sh@364 -- $ decimal 23 00:02:05.587 03:58:07 -- scripts/common.sh@352 -- $ local d=23 00:02:05.587 03:58:07 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.587 03:58:07 -- scripts/common.sh@354 -- $ echo 23 00:02:05.587 03:58:07 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:05.587 03:58:07 -- scripts/common.sh@365 -- $ decimal 24 00:02:05.587 03:58:07 -- scripts/common.sh@352 -- $ local d=24 00:02:05.588 03:58:07 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:05.588 03:58:07 -- scripts/common.sh@354 -- $ echo 24 00:02:05.588 03:58:07 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:05.588 03:58:07 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:05.588 03:58:07 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:05.588 03:58:07 -- scripts/common.sh@367 -- $ return 0 00:02:05.588 03:58:07 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:05.849 patching file lib/pcapng/rte_pcapng.c 00:02:05.849 03:58:07 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:05.849 03:58:07 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:05.849 03:58:07 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:05.849 03:58:07 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:05.849 03:58:07 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:11.144 The Meson build system 00:02:11.144 Version: 1.5.0 00:02:11.144 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:11.144 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:11.144 Build type: native build 00:02:11.144 Program cat found: YES (/usr/bin/cat) 00:02:11.144 Project name: DPDK 00:02:11.144 Project version: 23.11.0 00:02:11.144 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:11.144 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:11.144 Host machine cpu family: x86_64 00:02:11.144 Host machine cpu: x86_64 00:02:11.144 Message: ## Building in Developer Mode ## 00:02:11.144 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:11.144 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:11.144 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:11.144 Program python3 found: YES (/usr/bin/python3) 00:02:11.144 Program cat found: YES (/usr/bin/cat) 00:02:11.144 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:11.144 Compiler for C supports arguments -march=native: YES 00:02:11.144 Checking for size of "void *" : 8 00:02:11.144 Checking for size of "void *" : 8 (cached) 00:02:11.144 Library m found: YES 00:02:11.144 Library numa found: YES 00:02:11.144 Has header "numaif.h" : YES 00:02:11.144 Library fdt found: NO 00:02:11.144 Library execinfo found: NO 00:02:11.144 Has header "execinfo.h" : YES 00:02:11.144 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:11.144 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:11.144 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:11.144 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:11.144 Run-time dependency openssl found: YES 3.1.1 00:02:11.144 Run-time dependency libpcap found: YES 1.10.4 00:02:11.144 Has header "pcap.h" with dependency libpcap: YES 00:02:11.144 Compiler for C supports arguments -Wcast-qual: YES 00:02:11.144 Compiler for C supports arguments -Wdeprecated: YES 00:02:11.144 Compiler for C supports arguments -Wformat: YES 00:02:11.144 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:11.144 Compiler for C supports arguments -Wformat-security: NO 00:02:11.144 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.144 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:11.144 Compiler for C supports arguments -Wnested-externs: YES 00:02:11.144 Compiler for C supports arguments -Wold-style-definition: YES 00:02:11.144 Compiler for C supports arguments -Wpointer-arith: YES 00:02:11.144 Compiler for C supports arguments -Wsign-compare: YES 00:02:11.144 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:11.144 Compiler for C supports arguments -Wundef: YES 00:02:11.144 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.144 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:11.144 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:11.144 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.144 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.144 Program objdump found: YES (/usr/bin/objdump) 00:02:11.144 Compiler for C supports arguments -mavx512f: YES 00:02:11.144 Checking if "AVX512 checking" compiles: YES 00:02:11.144 Fetching value of define "__SSE4_2__" : 1 00:02:11.145 Fetching value of define "__AES__" : 1 00:02:11.145 Fetching value of define "__AVX__" : 1 00:02:11.145 Fetching value of define "__AVX2__" : 1 00:02:11.145 Fetching value of define "__AVX512BW__" : 1 00:02:11.145 Fetching value of define "__AVX512CD__" : 1 00:02:11.145 Fetching value of define "__AVX512DQ__" : 1 00:02:11.145 Fetching value of define "__AVX512F__" : 1 00:02:11.145 Fetching value of define "__AVX512VL__" : 1 00:02:11.145 Fetching value of define "__PCLMUL__" : 1 00:02:11.145 Fetching value of define "__RDRND__" : 1 00:02:11.145 Fetching value of define "__RDSEED__" : 1 00:02:11.145 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:11.145 Fetching value of define "__znver1__" : (undefined) 00:02:11.145 Fetching value of define "__znver2__" : (undefined) 00:02:11.145 Fetching value of define "__znver3__" : (undefined) 00:02:11.145 Fetching value of define "__znver4__" : (undefined) 00:02:11.145 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:11.145 Message: lib/log: Defining dependency "log" 00:02:11.145 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.145 Message: lib/telemetry: Defining dependency "telemetry" 
00:02:11.145 Checking for function "getentropy" : NO 00:02:11.145 Message: lib/eal: Defining dependency "eal" 00:02:11.145 Message: lib/ring: Defining dependency "ring" 00:02:11.145 Message: lib/rcu: Defining dependency "rcu" 00:02:11.145 Message: lib/mempool: Defining dependency "mempool" 00:02:11.145 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.145 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.145 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:11.145 Compiler for C supports arguments -mpclmul: YES 00:02:11.145 Compiler for C supports arguments -maes: YES 00:02:11.145 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.145 Compiler for C supports arguments -mavx512bw: YES 00:02:11.145 Compiler for C supports arguments -mavx512dq: YES 00:02:11.145 Compiler for C supports arguments -mavx512vl: YES 00:02:11.145 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:11.145 Compiler for C supports arguments -mavx2: YES 00:02:11.145 Compiler for C supports arguments -mavx: YES 00:02:11.145 Message: lib/net: Defining dependency "net" 00:02:11.145 Message: lib/meter: Defining dependency "meter" 00:02:11.145 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.145 Message: lib/pci: Defining dependency "pci" 00:02:11.145 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.145 Message: lib/metrics: Defining dependency "metrics" 00:02:11.145 Message: lib/hash: Defining dependency "hash" 00:02:11.145 Message: lib/timer: Defining dependency "timer" 00:02:11.145 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.145 Message: lib/acl: Defining dependency "acl" 00:02:11.145 Message: lib/bbdev: Defining dependency "bbdev" 00:02:11.145 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:11.145 Run-time dependency libelf found: YES 0.191 00:02:11.145 Message: lib/bpf: Defining dependency "bpf" 00:02:11.145 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:11.145 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.145 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.145 Message: lib/distributor: Defining dependency "distributor" 00:02:11.145 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.145 Message: lib/efd: Defining dependency "efd" 00:02:11.145 Message: lib/eventdev: Defining dependency "eventdev" 00:02:11.145 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:11.145 Message: lib/gpudev: Defining dependency "gpudev" 00:02:11.145 Message: lib/gro: Defining dependency "gro" 00:02:11.145 Message: lib/gso: Defining dependency "gso" 00:02:11.145 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:11.145 Message: lib/jobstats: Defining dependency "jobstats" 00:02:11.145 Message: lib/latencystats: Defining dependency "latencystats" 00:02:11.145 Message: lib/lpm: Defining dependency "lpm" 00:02:11.145 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512IFMA__" : 1 00:02:11.145 Message: 
lib/member: Defining dependency "member" 00:02:11.145 Message: lib/pcapng: Defining dependency "pcapng" 00:02:11.145 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.145 Message: lib/power: Defining dependency "power" 00:02:11.145 Message: lib/rawdev: Defining dependency "rawdev" 00:02:11.145 Message: lib/regexdev: Defining dependency "regexdev" 00:02:11.145 Message: lib/mldev: Defining dependency "mldev" 00:02:11.145 Message: lib/rib: Defining dependency "rib" 00:02:11.145 Message: lib/reorder: Defining dependency "reorder" 00:02:11.145 Message: lib/sched: Defining dependency "sched" 00:02:11.145 Message: lib/security: Defining dependency "security" 00:02:11.145 Message: lib/stack: Defining dependency "stack" 00:02:11.145 Has header "linux/userfaultfd.h" : YES 00:02:11.145 Has header "linux/vduse.h" : YES 00:02:11.145 Message: lib/vhost: Defining dependency "vhost" 00:02:11.145 Message: lib/ipsec: Defining dependency "ipsec" 00:02:11.145 Message: lib/pdcp: Defining dependency "pdcp" 00:02:11.145 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.145 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.145 Message: lib/fib: Defining dependency "fib" 00:02:11.145 Message: lib/port: Defining dependency "port" 00:02:11.145 Message: lib/pdump: Defining dependency "pdump" 00:02:11.145 Message: lib/table: Defining dependency "table" 00:02:11.145 Message: lib/pipeline: Defining dependency "pipeline" 00:02:11.145 Message: lib/graph: Defining dependency "graph" 00:02:11.145 Message: lib/node: Defining dependency "node" 00:02:11.145 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:11.145 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:11.145 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:11.145 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.091 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:12.091 Compiler for C supports arguments -Wno-unused-value: YES 00:02:12.091 Compiler for C supports arguments -Wno-format: YES 00:02:12.091 Compiler for C supports arguments -Wno-format-security: YES 00:02:12.091 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:12.091 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:12.091 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:12.091 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:12.091 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.091 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.091 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.091 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:12.091 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:12.091 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:12.091 Has header "sys/epoll.h" : YES 00:02:12.091 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.091 Configuring doxy-api-html.conf using configuration 00:02:12.091 Configuring doxy-api-man.conf using configuration 00:02:12.091 Program mandb found: YES (/usr/bin/mandb) 00:02:12.091 Program sphinx-build found: NO 00:02:12.091 Configuring rte_build_config.h using configuration 00:02:12.091 Message: 00:02:12.091 ================= 00:02:12.091 Applications Enabled 00:02:12.091 ================= 00:02:12.091 00:02:12.091 apps: 00:02:12.091 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf,
00:02:12.091 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:12.091 test-pmd, test-regex, test-sad, test-security-perf,
00:02:12.091
00:02:12.091 Message:
00:02:12.091 =================
00:02:12.091 Libraries Enabled
00:02:12.091 =================
00:02:12.091
00:02:12.091 libs:
00:02:12.091 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:12.091 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:12.091 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:12.091 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:12.091 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:12.091 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:12.091 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:12.091
00:02:12.091
00:02:12.091 Message:
00:02:12.091 ===============
00:02:12.091 Drivers Enabled
00:02:12.091 ===============
00:02:12.091
00:02:12.091 common:
00:02:12.091
00:02:12.091 bus:
00:02:12.091 pci, vdev,
00:02:12.091 mempool:
00:02:12.091 ring,
00:02:12.091 dma:
00:02:12.091
00:02:12.091 net:
00:02:12.091 i40e,
00:02:12.091 raw:
00:02:12.091
00:02:12.091 crypto:
00:02:12.091
00:02:12.091 compress:
00:02:12.091
00:02:12.091 regex:
00:02:12.091
00:02:12.091 ml:
00:02:12.091
00:02:12.091 vdpa:
00:02:12.091
00:02:12.091 event:
00:02:12.091
00:02:12.091 baseband:
00:02:12.091
00:02:12.091 gpu:
00:02:12.091
00:02:12.091
00:02:12.091 Message:
00:02:12.091 =================
00:02:12.091 Content Skipped
00:02:12.091 =================
00:02:12.091
00:02:12.091 apps:
00:02:12.091
00:02:12.091 libs:
00:02:12.091
00:02:12.091 drivers:
00:02:12.091 common/cpt: not in enabled drivers build config
00:02:12.091 common/dpaax: not in enabled drivers build config
00:02:12.091 common/iavf: not in enabled drivers build config
00:02:12.091 common/idpf: not in enabled drivers build config
00:02:12.091 common/mvep: not in enabled drivers build config
00:02:12.091 common/octeontx: not in enabled drivers build config
00:02:12.091 bus/auxiliary: not in enabled drivers build config
00:02:12.091 bus/cdx: not in enabled drivers build config
00:02:12.091 bus/dpaa: not in enabled drivers build config
00:02:12.091 bus/fslmc: not in enabled drivers build config
00:02:12.091 bus/ifpga: not in enabled drivers build config
00:02:12.091 bus/platform: not in enabled drivers build config
00:02:12.091 bus/vmbus: not in enabled drivers build config
00:02:12.091 common/cnxk: not in enabled drivers build config
00:02:12.091 common/mlx5: not in enabled drivers build config
00:02:12.091 common/nfp: not in enabled drivers build config
00:02:12.091 common/qat: not in enabled drivers build config
00:02:12.091 common/sfc_efx: not in enabled drivers build config
00:02:12.091 mempool/bucket: not in enabled drivers build config
00:02:12.091 mempool/cnxk: not in enabled drivers build config
00:02:12.091 mempool/dpaa: not in enabled drivers build config
00:02:12.091 mempool/dpaa2: not in enabled drivers build config
00:02:12.091 mempool/octeontx: not in enabled drivers build config
00:02:12.092 mempool/stack: not in enabled drivers build config
00:02:12.092 dma/cnxk: not in enabled drivers build config
00:02:12.092 dma/dpaa: not in enabled drivers build config
00:02:12.092 dma/dpaa2: not in enabled drivers build config
00:02:12.092 dma/hisilicon: not in enabled drivers build config
00:02:12.092 dma/idxd: not in enabled drivers build config
00:02:12.092 dma/ioat: not in enabled drivers build config
00:02:12.092 dma/skeleton: not in enabled drivers build config
00:02:12.092 net/af_packet: not in enabled drivers build config
00:02:12.092 net/af_xdp: not in enabled drivers build config
00:02:12.092 net/ark: not in enabled drivers build config
00:02:12.092 net/atlantic: not in enabled drivers build config
00:02:12.092 net/avp: not in enabled drivers build config
00:02:12.092 net/axgbe: not in enabled drivers build config
00:02:12.092 net/bnx2x: not in enabled drivers build config
00:02:12.092 net/bnxt: not in enabled drivers build config
00:02:12.092 net/bonding: not in enabled drivers build config
00:02:12.092 net/cnxk: not in enabled drivers build config
00:02:12.092 net/cpfl: not in enabled drivers build config
00:02:12.092 net/cxgbe: not in enabled drivers build config
00:02:12.092 net/dpaa: not in enabled drivers build config
00:02:12.092 net/dpaa2: not in enabled drivers build config
00:02:12.092 net/e1000: not in enabled drivers build config
00:02:12.092 net/ena: not in enabled drivers build config
00:02:12.092 net/enetc: not in enabled drivers build config
00:02:12.092 net/enetfec: not in enabled drivers build config
00:02:12.092 net/enic: not in enabled drivers build config
00:02:12.092 net/failsafe: not in enabled drivers build config
00:02:12.092 net/fm10k: not in enabled drivers build config
00:02:12.092 net/gve: not in enabled drivers build config
00:02:12.092 net/hinic: not in enabled drivers build config
00:02:12.092 net/hns3: not in enabled drivers build config
00:02:12.092 net/iavf: not in enabled drivers build config
00:02:12.092 net/ice: not in enabled drivers build config
00:02:12.092 net/idpf: not in enabled drivers build config
00:02:12.092 net/igc: not in enabled drivers build config
00:02:12.092 net/ionic: not in enabled drivers build config
00:02:12.092 net/ipn3ke: not in enabled drivers build config
00:02:12.092 net/ixgbe: not in enabled drivers build config
00:02:12.092 net/mana: not in enabled drivers build config
00:02:12.092 net/memif: not in enabled drivers build config
00:02:12.092 net/mlx4: not in enabled drivers build config
00:02:12.092 net/mlx5: not in enabled drivers build config
00:02:12.092 net/mvneta: not in enabled drivers build config
00:02:12.092 net/mvpp2: not in enabled drivers build config
00:02:12.092 net/netvsc: not in enabled drivers build config
00:02:12.092 net/nfb: not in enabled drivers build config
00:02:12.092 net/nfp: not in enabled drivers build config
00:02:12.092 net/ngbe: not in enabled drivers build config
00:02:12.092 net/null: not in enabled drivers build config
00:02:12.092 net/octeontx: not in enabled drivers build config
00:02:12.092 net/octeon_ep: not in enabled drivers build config
00:02:12.092 net/pcap: not in enabled drivers build config
00:02:12.092 net/pfe: not in enabled drivers build config
00:02:12.092 net/qede: not in enabled drivers build config
00:02:12.092 net/ring: not in enabled drivers build config
00:02:12.092 net/sfc: not in enabled drivers build config
00:02:12.092 net/softnic: not in enabled drivers build config
00:02:12.092 net/tap: not in enabled drivers build config
00:02:12.092 net/thunderx: not in enabled drivers build config
00:02:12.092 net/txgbe: not in enabled drivers build config
00:02:12.092 net/vdev_netvsc: not in enabled drivers build config
00:02:12.092 net/vhost: not in enabled drivers build config
00:02:12.092 net/virtio: not in enabled drivers build config
00:02:12.092 net/vmxnet3: not in enabled drivers build config
00:02:12.092 raw/cnxk_bphy: not in enabled drivers build config
00:02:12.092 raw/cnxk_gpio: not in enabled drivers build config
00:02:12.092 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:12.092 raw/ifpga: not in enabled drivers build config
00:02:12.092 raw/ntb: not in enabled drivers build config
00:02:12.092 raw/skeleton: not in enabled drivers build config
00:02:12.092 crypto/armv8: not in enabled drivers build config
00:02:12.092 crypto/bcmfs: not in enabled drivers build config
00:02:12.092 crypto/caam_jr: not in enabled drivers build config
00:02:12.092 crypto/ccp: not in enabled drivers build config
00:02:12.092 crypto/cnxk: not in enabled drivers build config
00:02:12.092 crypto/dpaa_sec: not in enabled drivers build config
00:02:12.092 crypto/dpaa2_sec: not in enabled drivers build config
00:02:12.092 crypto/ipsec_mb: not in enabled drivers build config
00:02:12.092 crypto/mlx5: not in enabled drivers build config
00:02:12.092 crypto/mvsam: not in enabled drivers build config
00:02:12.092 crypto/nitrox: not in enabled drivers build config
00:02:12.092 crypto/null: not in enabled drivers build config
00:02:12.092 crypto/octeontx: not in enabled drivers build config
00:02:12.092 crypto/openssl: not in enabled drivers build config
00:02:12.092 crypto/scheduler: not in enabled drivers build config
00:02:12.092 crypto/uadk: not in enabled drivers build config
00:02:12.092 crypto/virtio: not in enabled drivers build config
00:02:12.092 compress/isal: not in enabled drivers build config
00:02:12.092 compress/mlx5: not in enabled drivers build config
00:02:12.092 compress/octeontx: not in enabled drivers build config
00:02:12.092 compress/zlib: not in enabled drivers build config
00:02:12.092 regex/mlx5: not in enabled drivers build config
00:02:12.092 regex/cn9k: not in enabled drivers build config
00:02:12.092 ml/cnxk: not in enabled drivers build config
00:02:12.092 vdpa/ifc: not in enabled drivers build config
00:02:12.092 vdpa/mlx5: not in enabled drivers build config
00:02:12.092 vdpa/nfp: not in enabled drivers build config
00:02:12.092 vdpa/sfc: not in enabled drivers build config
00:02:12.092 event/cnxk: not in enabled drivers build config
00:02:12.092 event/dlb2: not in enabled drivers build config
00:02:12.092 event/dpaa: not in enabled drivers build config
00:02:12.092 event/dpaa2: not in enabled drivers build config
00:02:12.092 event/dsw: not in enabled drivers build config
00:02:12.092 event/opdl: not in enabled drivers build config
00:02:12.092 event/skeleton: not in enabled drivers build config
00:02:12.092 event/sw: not in enabled drivers build config
00:02:12.092 event/octeontx: not in enabled drivers build config
00:02:12.092 baseband/acc: not in enabled drivers build config
00:02:12.092 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:12.092 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:12.092 baseband/la12xx: not in enabled drivers build config
00:02:12.092 baseband/null: not in enabled drivers build config
00:02:12.092 baseband/turbo_sw: not in enabled drivers build config
00:02:12.092 gpu/cuda: not in enabled drivers build config
00:02:12.092
00:02:12.092
00:02:12.092 Build targets in project: 215
00:02:12.092
00:02:12.092 DPDK 23.11.0
00:02:12.092
00:02:12.092 User defined options
00:02:12.092 libdir : lib
00:02:12.092 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:12.092 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:12.092 c_link_args :
00:02:12.092 enable_docs : false
00:02:12.092 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:12.092 enable_kmods : false
00:02:12.092 machine : native
00:02:12.092 tests : false
00:02:12.092
00:02:12.092 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:12.092 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:12.092 03:58:13 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:12.092 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:12.093 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:12.093 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:12.093 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:12.093 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:12.093 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:12.355 [6/705] Linking static target lib/librte_kvargs.a
00:02:12.355 [7/705] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:12.355 [8/705] Linking static target lib/librte_log.a
00:02:12.355 [9/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:12.355 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:12.355 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:12.355 [12/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:12.355 [13/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.355 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:12.616 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:12.616 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:12.616 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.616 [18/705] Linking target lib/librte_log.so.24.0
00:02:12.878 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:12.878 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:12.878 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:12.878 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:12.878 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:12.878 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:12.878 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:13.140 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:13.140 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:13.140 [28/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:13.140 [29/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:13.140 [30/705] Linking static target lib/librte_telemetry.a
00:02:13.140 [31/705] Linking target lib/librte_kvargs.so.24.0
00:02:13.140 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:13.140 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:13.140 [34/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:13.401 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:13.401 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:13.401 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:13.401 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:13.401 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:13.401 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:13.401 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:13.663 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.663 [43/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:13.663 [44/705] Linking target lib/librte_telemetry.so.24.0
00:02:13.663 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:13.663 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:13.663 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:13.925 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:13.925 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:13.925 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:13.925 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:13.925 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:13.925 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:13.925 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:13.925 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:13.925 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:14.188 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:14.188 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:14.188 [59/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:14.188 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:14.188 [61/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:14.188 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:14.188 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:14.188 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:14.188 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:14.188 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:14.188 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:14.449 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:14.449 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:14.449 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:14.449 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:14.449 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:14.449 [73/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
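Note: the configure step that produced the "User defined options" summary above is not itself echoed in this log, only its result and the WARNING that the wrapper ran the bare `meson [options]` form. A minimal sketch of an equivalent, non-deprecated invocation, assuming only the option names and paths actually shown in the summary (DPDK's enable_docs/enable_drivers/enable_kmods/machine/tests options plus meson's built-in prefix/libdir/c_args), would be:

    cd /home/vagrant/spdk_repo/dpdk
    # 'meson setup' is the unambiguous spelling; the bare 'meson [options]'
    # form used by the wrapper triggers the deprecation WARNING seen above.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # Build step matching the autobuild command recorded in the log.
    ninja -C build-tmp -j10

The enable_drivers list explains the "Drivers Enabled" and "Content Skipped" sections: only bus/pci, bus/vdev, mempool/ring and net/i40e are compiled, and every other driver is reported as "not in enabled drivers build config".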
00:02:14.449 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.707 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.707 [76/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:14.707 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.707 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.707 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.707 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:15.001 [81/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.001 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.001 [83/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.001 [84/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.001 [85/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.001 [86/705] Linking static target lib/librte_ring.a 00:02:15.001 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.259 [88/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.259 [89/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.259 [90/705] Linking static target lib/librte_eal.a 00:02:15.259 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.259 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.259 [93/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.259 [94/705] Linking static target lib/librte_mempool.a 00:02:15.259 [95/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.259 [96/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.518 [97/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.518 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.518 [99/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.518 [100/705] Linking static target lib/librte_rcu.a 00:02:15.518 [101/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.518 [102/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:15.518 [103/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.776 [104/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.776 [105/705] Linking static target lib/librte_net.a 00:02:15.776 [106/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.776 [107/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.776 [108/705] Linking static target lib/librte_meter.a 00:02:15.776 [109/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.776 [110/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.776 [111/705] Linking static target lib/librte_mbuf.a 00:02:15.776 [112/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.776 [113/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.776 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.035 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:02:16.035 [116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.035 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.035 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.292 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.292 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.550 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.550 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:16.809 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.809 [124/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.809 [125/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:16.809 [126/705] Linking static target lib/librte_pci.a 00:02:16.809 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.809 [128/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.809 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.809 [130/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.809 [131/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.067 [132/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.067 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.067 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.067 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.067 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.067 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.067 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.067 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.067 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:17.067 [141/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.067 [142/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.067 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.067 [144/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.067 [145/705] Linking static target lib/librte_cmdline.a 00:02:17.326 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:17.326 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:17.326 [148/705] Linking static target lib/librte_metrics.a 00:02:17.326 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:17.584 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.841 [151/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.841 [152/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.841 [153/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.841 [154/705] Linking static target lib/librte_timer.a 00:02:18.099 [155/705] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:18.099 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.099 [157/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:18.099 [158/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:18.099 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:18.357 [160/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:18.357 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:18.357 [162/705] Linking static target lib/librte_bitratestats.a 00:02:18.615 [163/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.615 [164/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.874 [165/705] Linking static target lib/librte_ethdev.a 00:02:18.874 [166/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:18.874 [167/705] Linking static target lib/librte_bbdev.a 00:02:18.874 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:18.874 [169/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:18.874 [170/705] Linking static target lib/acl/libavx2_tmp.a 00:02:18.874 [171/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:19.132 [172/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:19.132 [173/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:19.132 [174/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:19.132 [175/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.390 [176/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.390 [177/705] Linking static target lib/librte_hash.a 00:02:19.390 [178/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:19.390 [179/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.390 [180/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:19.390 [181/705] Linking target lib/librte_eal.so.24.0 00:02:19.649 [182/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:19.649 [183/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:19.649 [184/705] Linking target lib/librte_ring.so.24.0 00:02:19.649 [185/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:19.649 [186/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:19.649 [187/705] Linking target lib/librte_meter.so.24.0 00:02:19.649 [188/705] Linking target lib/librte_pci.so.24.0 00:02:19.649 [189/705] Linking target lib/librte_timer.so.24.0 00:02:19.649 [190/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:19.649 [191/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:19.649 [192/705] Linking target lib/librte_rcu.so.24.0 00:02:19.649 [193/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:19.649 [194/705] Linking target lib/librte_mempool.so.24.0 00:02:19.649 [195/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:19.649 [196/705] Linking static target lib/librte_cfgfile.a 00:02:19.649 [197/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:19.649 [198/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:19.907 [199/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:19.908 [200/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:19.908 [201/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.908 [202/705] Linking target lib/librte_mbuf.so.24.0 00:02:19.908 [203/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:19.908 [204/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.908 [205/705] Linking target lib/librte_net.so.24.0 00:02:19.908 [206/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:19.908 [207/705] Linking target lib/librte_bbdev.so.24.0 00:02:19.908 [208/705] Linking static target lib/librte_bpf.a 00:02:19.908 [209/705] Linking target lib/librte_cfgfile.so.24.0 00:02:19.908 [210/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:19.908 [211/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.166 [212/705] Linking target lib/librte_cmdline.so.24.0 00:02:20.166 [213/705] Linking target lib/librte_hash.so.24.0 00:02:20.166 [214/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.166 [215/705] Linking static target lib/librte_compressdev.a 00:02:20.166 [216/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.166 [217/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:20.166 [218/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.166 [219/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.424 [220/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:20.424 [221/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.424 [222/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.424 [223/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:20.424 [224/705] Linking target lib/librte_compressdev.so.24.0 00:02:20.424 [225/705] Linking static target lib/librte_acl.a 00:02:20.424 [226/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:20.424 [227/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:20.424 [228/705] Linking static target lib/librte_distributor.a 00:02:20.681 [229/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.681 [230/705] Linking static target lib/librte_dmadev.a 00:02:20.681 [231/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:20.681 [232/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.681 [233/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.681 [234/705] Linking target lib/librte_distributor.so.24.0 00:02:20.681 [235/705] Linking target lib/librte_acl.so.24.0 00:02:20.939 [236/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:20.939 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:20.939 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:20.939 [239/705] Linking target lib/librte_dmadev.so.24.0 00:02:21.197 [240/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:21.197 [241/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:21.197 [242/705] Linking static target lib/librte_efd.a 00:02:21.197 [243/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:21.197 [244/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:21.197 [245/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.197 [246/705] Linking target lib/librte_efd.so.24.0 00:02:21.455 [247/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.455 [248/705] Linking static target lib/librte_cryptodev.a 00:02:21.712 [249/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:21.712 [250/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:21.712 [251/705] Linking static target lib/librte_dispatcher.a 00:02:21.712 [252/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:21.712 [253/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:21.969 [254/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:21.969 [255/705] Linking static target lib/librte_gpudev.a 00:02:21.969 [256/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:21.970 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.970 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:22.227 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:22.227 [260/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:22.227 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:22.227 [262/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.227 [263/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.227 [264/705] Linking target lib/librte_cryptodev.so.24.0 00:02:22.227 [265/705] Linking target lib/librte_ethdev.so.24.0 00:02:22.485 [266/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:22.485 [267/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:22.485 [268/705] Linking target lib/librte_metrics.so.24.0 00:02:22.485 [269/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:22.485 [270/705] Linking target lib/librte_bpf.so.24.0 00:02:22.485 [271/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:22.485 [272/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:22.485 [273/705] Linking static target lib/librte_gro.a 00:02:22.485 [274/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.485 [275/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:22.485 [276/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:22.485 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:22.485 [278/705] Linking target lib/librte_gpudev.so.24.0 00:02:22.485 [279/705] Linking target lib/librte_bitratestats.so.24.0 00:02:22.485 [280/705] Generating symbol file 
lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:22.743 [281/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.743 [282/705] Linking target lib/librte_gro.so.24.0 00:02:22.743 [283/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:22.743 [284/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:22.743 [285/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:22.743 [286/705] Linking static target lib/librte_gso.a 00:02:22.743 [287/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:23.001 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:23.001 [289/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:23.001 [290/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:23.001 [291/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.001 [292/705] Linking static target lib/librte_eventdev.a 00:02:23.001 [293/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:23.001 [294/705] Linking static target lib/librte_jobstats.a 00:02:23.001 [295/705] Linking target lib/librte_gso.so.24.0 00:02:23.001 [296/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:23.001 [297/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:23.001 [298/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:23.001 [299/705] Linking static target lib/librte_latencystats.a 00:02:23.259 [300/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:23.259 [301/705] Linking static target lib/librte_ip_frag.a 00:02:23.259 [302/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.259 [303/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:23.259 [304/705] Linking target lib/librte_jobstats.so.24.0 00:02:23.259 [305/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:23.259 [306/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.259 [307/705] Linking target lib/librte_latencystats.so.24.0 00:02:23.259 [308/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:23.518 [309/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.518 [310/705] Linking target lib/librte_ip_frag.so.24.0 00:02:23.518 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:23.518 [312/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:23.518 [313/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:23.800 [314/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:23.800 [315/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:23.800 [316/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:23.800 [317/705] Linking static target lib/librte_pcapng.a 00:02:23.800 [318/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:23.800 [319/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:23.800 [320/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:23.800 [321/705] 
Linking static target lib/librte_lpm.a 00:02:23.800 [322/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:23.800 [323/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.058 [324/705] Linking target lib/librte_pcapng.so.24.0 00:02:24.058 [325/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.059 [326/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.059 [327/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.059 [328/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.059 [329/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:24.059 [330/705] Linking target lib/librte_lpm.so.24.0 00:02:24.059 [331/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.059 [332/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:24.059 [333/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.317 [334/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.317 [335/705] Linking static target lib/librte_power.a 00:02:24.317 [336/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:24.317 [337/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.317 [338/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:24.317 [339/705] Linking static target lib/librte_regexdev.a 00:02:24.317 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:24.317 [341/705] Linking target lib/librte_eventdev.so.24.0 00:02:24.577 [342/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:24.577 [343/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:24.577 [344/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:24.577 [345/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:24.577 [346/705] Linking static target lib/librte_rawdev.a 00:02:24.577 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:24.577 [348/705] Linking static target lib/librte_mldev.a 00:02:24.577 [349/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:24.577 [350/705] Linking target lib/librte_dispatcher.so.24.0 00:02:24.577 [351/705] Linking static target lib/librte_member.a 00:02:24.835 [352/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.835 [353/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:24.836 [354/705] Linking target lib/librte_power.so.24.0 00:02:24.836 [355/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.836 [356/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:24.836 [357/705] Linking target lib/librte_member.so.24.0 00:02:24.836 [358/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.836 [359/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:24.836 [360/705] Linking target lib/librte_rawdev.so.24.0 00:02:24.836 [361/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.836 [362/705] Linking target 
lib/librte_regexdev.so.24.0 00:02:25.095 [363/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.095 [364/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:25.095 [365/705] Linking static target lib/librte_rib.a 00:02:25.095 [366/705] Linking static target lib/librte_reorder.a 00:02:25.095 [367/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.095 [368/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:25.095 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:25.095 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:25.095 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:25.095 [372/705] Linking static target lib/librte_stack.a 00:02:25.353 [373/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.353 [374/705] Linking target lib/librte_reorder.so.24.0 00:02:25.353 [375/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.353 [376/705] Linking target lib/librte_rib.so.24.0 00:02:25.353 [377/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.353 [378/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:25.353 [379/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.353 [380/705] Linking static target lib/librte_security.a 00:02:25.353 [381/705] Linking target lib/librte_stack.so.24.0 00:02:25.353 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.353 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.353 [384/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.353 [385/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:25.353 [386/705] Linking target lib/librte_mldev.so.24.0 00:02:25.611 [387/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.611 [388/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:25.611 [389/705] Linking static target lib/librte_sched.a 00:02:25.611 [390/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.611 [391/705] Linking target lib/librte_security.so.24.0 00:02:25.869 [392/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.869 [393/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:25.869 [394/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.869 [395/705] Linking target lib/librte_sched.so.24.0 00:02:25.869 [396/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.126 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.126 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:26.126 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:26.126 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:26.384 [401/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.384 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:26.384 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:26.384 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 
00:02:26.642 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:26.642 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:26.642 [407/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:26.642 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:26.901 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:26.901 [410/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:26.901 [411/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:27.160 [412/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:27.160 [413/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:27.160 [414/705] Linking static target lib/librte_ipsec.a 00:02:27.160 [415/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:27.160 [416/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:27.418 [417/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:27.418 [418/705] Linking static target lib/librte_fib.a 00:02:27.418 [419/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.418 [420/705] Linking target lib/librte_ipsec.so.24.0 00:02:27.418 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:27.418 [422/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:27.418 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:27.418 [424/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.676 [425/705] Linking target lib/librte_fib.so.24.0 00:02:27.676 [426/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:27.676 [427/705] Linking static target lib/librte_pdcp.a 00:02:27.676 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:27.676 [429/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:27.676 [430/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:27.933 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.933 [432/705] Linking target lib/librte_pdcp.so.24.0 00:02:27.933 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:27.933 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:28.191 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:28.191 [436/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:28.191 [437/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:28.191 [438/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:28.449 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:28.449 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:28.449 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:28.449 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:28.449 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:28.708 [444/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:28.708 [445/705] Linking static target lib/librte_port.a 00:02:28.708 [446/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:28.708 [447/705] Compiling 
C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:28.708 [448/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:28.708 [449/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:28.708 [450/705] Linking static target lib/librte_pdump.a 00:02:28.708 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:28.966 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.966 [453/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.966 [454/705] Linking target lib/librte_pdump.so.24.0 00:02:28.966 [455/705] Linking target lib/librte_port.so.24.0 00:02:28.966 [456/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:29.225 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:29.225 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:29.225 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:29.225 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:29.225 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:29.484 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:29.484 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:29.484 [464/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.484 [465/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:29.484 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:29.484 [467/705] Linking static target lib/librte_table.a 00:02:29.743 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:29.743 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:29.743 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.743 [471/705] Linking target lib/librte_table.so.24.0 00:02:30.002 [472/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:30.002 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:30.002 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:30.002 [475/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:30.261 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:30.261 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:30.261 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:30.261 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:30.261 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:30.520 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:30.520 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:30.520 [483/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:30.778 [484/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:30.778 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:30.778 [486/705] Linking static target lib/librte_graph.a 00:02:30.778 [487/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:31.037 [488/705] 
Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:31.037 [489/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.037 [490/705] Linking target lib/librte_graph.so.24.0 00:02:31.037 [491/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:31.296 [492/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:31.296 [493/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:31.296 [494/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:31.296 [495/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:31.296 [496/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:31.581 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:31.581 [498/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:31.581 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:31.581 [500/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:31.581 [501/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:31.581 [502/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:31.581 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:31.864 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:31.864 [505/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:31.864 [506/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:31.864 [507/705] Linking static target lib/librte_node.a 00:02:31.864 [508/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:31.864 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:31.864 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.123 [511/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.123 [512/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.123 [513/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.123 [514/705] Linking target lib/librte_node.so.24.0 00:02:32.123 [515/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.123 [516/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.123 [517/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.123 [518/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.123 [519/705] Linking static target drivers/librte_bus_vdev.a 00:02:32.381 [520/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.381 [521/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.381 [522/705] Linking static target drivers/librte_bus_pci.a 00:02:32.381 [523/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.381 [524/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.381 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.381 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:32.381 [527/705] Linking target drivers/librte_bus_vdev.so.24.0 00:02:32.381 [528/705] 
Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:32.639 [529/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:32.639 [530/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:32.639 [531/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:32.639 [532/705] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:32.639 [533/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.639 [534/705] Linking target drivers/librte_bus_pci.so.24.0
00:02:32.639 [535/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:32.639 [536/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:32.639 [537/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:32.639 [538/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:32.639 [539/705] Linking static target drivers/librte_mempool_ring.a
00:02:32.897 [540/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:32.897 [541/705] Linking target drivers/librte_mempool_ring.so.24.0
00:02:32.897 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:33.463 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:33.463 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:33.463 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:33.463 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:33.721 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:02:33.721 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:02:33.980 [549/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:33.980 [550/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:34.239 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:34.239 [552/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:34.239 [553/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:34.239 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:34.497 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:34.497 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:34.497 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:02:34.755 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:02:34.755 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:02:34.755 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:02:34.755 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:35.014 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:02:35.014 [563/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:02:35.014 [564/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:02:35.272 [565/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:02:35.272 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:35.272 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:35.272 [568/705] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:02:35.272 [569/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:02:35.530 [570/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:02:35.530 [571/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:02:35.530 [572/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:35.530 [573/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:02:35.789 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:35.789 [575/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:35.789 [576/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:35.789 [577/705] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:35.789 [578/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:35.789 [579/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:36.048 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:36.048 [581/705] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:36.048 [582/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:36.048 [583/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:36.048 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:36.048 [585/705] Linking static target drivers/librte_net_i40e.a
00:02:36.306 [586/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:36.306 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:36.565 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:36.565 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.565 [590/705] Linking target drivers/librte_net_i40e.so.24.0
00:02:36.565 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:36.565 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:36.565 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:36.823 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:36.823 [595/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:36.823 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:37.082 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:37.082 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:37.082 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:37.082 [600/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:37.341 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:37.341 [602/705] Linking static target lib/librte_vhost.a
00:02:37.341 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:37.341 [604/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:37.341 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:37.341 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:37.601 [607/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:02:37.601 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:37.601 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:37.601 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:37.601 [611/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:37.601 [612/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:37.859 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:02:37.859 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:37.859 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:38.116 [616/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.116 [617/705] Linking target lib/librte_vhost.so.24.0
00:02:38.116 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:38.375 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:38.633 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:38.633 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:38.633 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:38.633 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:38.633 [624/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:38.892 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:38.892 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:02:38.892 [627/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:38.892 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:38.892 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:02:38.892 [630/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:38.892 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:02:39.150 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:02:39.150 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:02:39.150 [634/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:39.150 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:02:39.150 [636/705] Linking static target lib/librte_pipeline.a
00:02:39.409 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:02:39.409 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:02:39.409 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:02:39.409 [640/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:39.409 [641/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:02:39.409 [642/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:02:39.409 [643/705] Linking target app/dpdk-dumpcap
00:02:39.698 [644/705] Linking target app/dpdk-graph
00:02:39.698 [645/705] Linking target app/dpdk-pdump
00:02:39.698 [646/705] Linking target app/dpdk-test-cmdline
00:02:39.698 [647/705] Linking target app/dpdk-proc-info
00:02:39.698 [648/705] Linking target app/dpdk-test-acl
00:02:39.698 [649/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:39.698 [650/705] Linking target app/dpdk-test-compress-perf
00:02:39.957 [651/705] Linking target app/dpdk-test-crypto-perf
00:02:39.957 [652/705] Linking target app/dpdk-test-fib
00:02:39.957 [653/705] Linking target app/dpdk-test-dma-perf
00:02:39.957 [654/705] Linking target app/dpdk-test-flow-perf
00:02:39.957 [655/705] Linking target app/dpdk-test-gpudev
00:02:40.215 [656/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:40.215 [657/705] Linking target app/dpdk-test-eventdev
00:02:40.215 [658/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:40.215 [659/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:40.215 [660/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:40.473 [661/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:02:40.473 [662/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:40.473 [663/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:40.473 [664/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:40.473 [665/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:40.473 [666/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:40.731 [667/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:40.731 [668/705] Linking target app/dpdk-test-mldev
00:02:40.731 [669/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:40.731 [670/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:02:40.989 [671/705] Linking target app/dpdk-test-bbdev
00:02:40.989 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:40.989 [673/705] Linking target app/dpdk-test-pipeline
00:02:40.989 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:41.247 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:41.247 [676/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.247 [677/705] Linking target lib/librte_pipeline.so.24.0
00:02:41.247 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:41.505 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:41.506 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:41.506 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:41.506 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:41.763 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:41.763 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:02:41.763 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:41.763 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:41.763 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:42.020 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:42.020 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:42.020 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:42.278 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:42.278 [692/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:42.278 [693/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:42.537 [694/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:42.537 [695/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:42.537 [696/705] Linking target app/dpdk-test-sad
00:02:42.537 [697/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:42.795 [698/705] Linking target app/dpdk-test-regex
00:02:42.795 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:42.795 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:43.052 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:43.052 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:43.052 [703/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:43.310 [704/705] Linking target app/dpdk-testpmd
00:02:43.310 [705/705] Linking target app/dpdk-test-security-perf
00:02:43.310 03:58:44 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:43.310 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:43.310 [0/1] Installing files.
00:02:43.572 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:43.572 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.573 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.574 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:43.575 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:43.576 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:43.577 Installing
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.577 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.578 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.578 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.578 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.578 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:43.578 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:43.578 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.578 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.578 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:43.578 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.578 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:43.579 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.579 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.841 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.841 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.841 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.841 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:43.841 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.841 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:43.841 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.841 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:43.841 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.841 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:43.841 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.841 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.841 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.841 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.841 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.842 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.843 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.844 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.845 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.845 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.845 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:43.845 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:43.845 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:43.845 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:43.845 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:43.845 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:43.845 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:43.845 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:43.845 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:43.845 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:43.845 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:43.845 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:43.845 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:43.845 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:43.845 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:43.845 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:43.845 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:43.845 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:43.845 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:43.845 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:43.845 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:43.845 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:43.845 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:43.845 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:43.845 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:43.845 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:43.845 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:43.845 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:43.845 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:43.845 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:43.845 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:43.845 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:43.845 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:43.845 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:43.845 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:43.845 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:43.845 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:43.845 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:43.845 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:43.845 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:43.845 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:43.845 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:43.845 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:43.845 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:43.845 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:43.845 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:43.845 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:43.845 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:43.845 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:43.845 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:43.845 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:43.845 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:43.845 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:43.845 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:43.845 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:43.845 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:43.845 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:43.845 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:43.845 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:43.845 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:43.845 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:43.845 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:43.845 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:43.845 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:43.845 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:43.845 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:43.845 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:43.845 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:43.845 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:43.845 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:43.845 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:43.845 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:43.845 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:43.845 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:43.845 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:43.845 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:43.845 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:43.845 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:43.845 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:43.845 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:43.845 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:43.845 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:43.845 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:43.845 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:43.845 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:43.845 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:43.845 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:43.845 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:43.845 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:43.845 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:43.845 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:43.845 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:43.845 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:43.845 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:43.845 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:43.845 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:43.845 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:43.845 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:43.845 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:43.845 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:43.845 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:43.845 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:43.845 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:43.845 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:43.846 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:43.846 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:43.846 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:43.846 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:43.846 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:43.846 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:43.846 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:43.846 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:43.846 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:43.846 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:43.846 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:43.846 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:43.846 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:43.846 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:43.846 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:43.846 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:43.846 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:43.846 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:43.846 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:43.846 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:43.846 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:43.846 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:43.846 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:43.846 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:43.846 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:43.846 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:43.846 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:43.846 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:43.846 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
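(The "Installing symlink" entries above implement the standard ELF shared-library naming chain, and the './librte_*.so*' -> 'dpdk/pmds-24.0/...' lines group the PMD plugins under a versioned driver directory. A minimal sketch of the equivalent commands, assuming GNU ln and using librte_net from the entries above as the example:)

  # Recreate one version chain by hand; the installer does this for every library.
  cd /home/vagrant/spdk_repo/dpdk/build/lib
  ln -sf librte_net.so.24.0 librte_net.so.24   # soname link, resolved by the dynamic linker at run time
  ln -sf librte_net.so.24   librte_net.so      # linker name, resolved by -lrte_net at link time
  # PMD plugins live in a versioned subdirectory (dpdk/pmds-24.0/) and are
  # symlinked back into lib/, as the quoted '->' entries show.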
00:02:43.846 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:43.846 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:44.104 03:58:45 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:44.104 03:58:45 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:44.104 03:58:45 -- common/autobuild_common.sh@203 -- $ cat 00:02:44.104 03:58:45 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:44.104 00:02:44.104 real 0m38.324s 00:02:44.104 user 4m25.440s 00:02:44.104 sys 0m40.241s 00:02:44.104 03:58:45 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:44.104 03:58:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:44.104 ************************************ 00:02:44.104 END TEST build_native_dpdk 00:02:44.104 ************************************ 00:02:44.104 03:58:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:44.104 03:58:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:44.104 03:58:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:44.104 03:58:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:44.104 03:58:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:44.104 03:58:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:44.104 03:58:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:44.104 03:58:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:44.104 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:44.104 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:44.104 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:44.104 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:44.363 Using 'verbs' RDMA provider 00:02:55.271 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:05.254 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:05.254 Creating mk/config.mk...done. 00:03:05.254 Creating mk/cc.flags.mk...done. 00:03:05.254 Type 'make' to build. 00:03:05.254 03:59:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:05.254 03:59:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:05.254 03:59:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:05.254 03:59:06 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.254 ************************************ 00:03:05.254 START TEST make 00:03:05.254 ************************************ 00:03:05.254 03:59:06 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:05.254 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:05.254 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:05.254 meson setup builddir \ 00:03:05.254 -Dwith-libaio=enabled \ 00:03:05.254 -Dwith-liburing=enabled \ 00:03:05.254 -Dwith-libvfn=disabled \ 00:03:05.254 -Dwith-spdk=false && \ 00:03:05.254 meson compile -C builddir && \ 00:03:05.254 cd -) 00:03:05.254 make[1]: Nothing to be done for 'all'. 
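(The configure step above resolves DPDK through the pkg-config files installed earlier, libdpdk.pc and libdpdk-libs.pc under build/lib/pkgconfig. A minimal sketch of how an out-of-tree consumer could pick up the same flags; hello.c is a placeholder, not part of this build:)

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk                      # package version of the freshly installed build
  cc hello.c $(pkg-config --cflags --libs libdpdk) -o hello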
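(The xnvme configure/compile step just logged can be reproduced standalone; flags are the same ones shown in the log, with the build directory taken from the log as well:)

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson setup builddir \
      -Dwith-libaio=enabled \
      -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled \
      -Dwith-spdk=false        # SPDK supplies its own NVMe backend in this build
  meson compile -C builddir    # equivalent to: ninja -C builddir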
00:03:07.154 The Meson build system 00:03:07.154 Version: 1.5.0 00:03:07.154 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:07.154 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:07.154 Build type: native build 00:03:07.154 Project name: xnvme 00:03:07.154 Project version: 0.7.3 00:03:07.154 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:07.154 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:07.154 Host machine cpu family: x86_64 00:03:07.154 Host machine cpu: x86_64 00:03:07.154 Message: host_machine.system: linux 00:03:07.154 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:07.154 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:07.154 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:07.154 Run-time dependency threads found: YES 00:03:07.154 Has header "setupapi.h" : NO 00:03:07.154 Has header "linux/blkzoned.h" : YES 00:03:07.154 Has header "linux/blkzoned.h" : YES (cached) 00:03:07.154 Has header "libaio.h" : YES 00:03:07.154 Library aio found: YES 00:03:07.154 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:07.154 Run-time dependency liburing found: YES 2.2 00:03:07.154 Dependency libvfn skipped: feature with-libvfn disabled 00:03:07.154 Run-time dependency appleframeworks found: NO (tried framework) 00:03:07.154 Run-time dependency appleframeworks found: NO (tried framework) 00:03:07.154 Configuring xnvme_config.h using configuration 00:03:07.154 Configuring xnvme.spec using configuration 00:03:07.154 Run-time dependency bash-completion found: YES 2.11 00:03:07.154 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:07.154 Program cp found: YES (/usr/bin/cp) 00:03:07.154 Has header "winsock2.h" : NO 00:03:07.154 Has header "dbghelp.h" : NO 00:03:07.154 Library rpcrt4 found: NO 00:03:07.154 Library rt found: YES 00:03:07.154 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:07.154 Found CMake: /usr/bin/cmake (3.27.7) 00:03:07.154 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:07.154 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:07.154 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:07.154 Build targets in project: 32 00:03:07.154 00:03:07.154 xnvme 0.7.3 00:03:07.154 00:03:07.154 User defined options 00:03:07.154 with-libaio : enabled 00:03:07.154 with-liburing: enabled 00:03:07.154 with-libvfn : disabled 00:03:07.154 with-spdk : false 00:03:07.154 00:03:07.154 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:07.412 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:07.670 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:07.670 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:07.670 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:07.670 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:07.670 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:07.670 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:07.670 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:07.670 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:07.670 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:07.670 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:07.670 
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:07.670 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:07.670 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:07.670 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:07.670 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:07.670 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:07.670 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:07.670 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:07.670 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:07.670 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:07.928 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:07.928 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:07.928 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:07.928 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:07.928 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:07.928 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:07.928 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:07.928 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:07.928 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:07.928 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:07.928 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:07.928 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:07.928 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:07.928 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:07.928 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:07.928 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:07.928 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:07.928 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:07.928 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:07.928 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:07.928 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:07.928 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:07.928 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:07.928 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:07.928 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:07.928 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:07.928 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:07.928 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:07.928 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:07.928 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:07.928 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:07.928 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:07.928 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:07.928 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:07.928 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:07.928 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:07.928 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:07.928 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:08.186 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:08.186 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:08.186 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:08.186 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:08.187 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:08.187 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:08.187 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:08.187 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:08.187 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:08.187 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:08.187 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:08.187 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:08.187 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:08.187 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:08.187 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:08.187 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:08.187 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:08.187 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:08.187 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:08.187 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:08.187 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:08.187 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:08.444 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:08.444 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:08.444 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:08.444 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:08.444 [85/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:08.444 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:08.444 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:08.444 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:08.444 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:08.444 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:08.444 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:08.444 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:08.444 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:08.444 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:08.444 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:08.444 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:08.444 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:08.444 [98/203] 
Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:08.444 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:08.444 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:08.444 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:08.444 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:08.444 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:08.444 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:08.444 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:08.444 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:08.444 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:08.444 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:08.703 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:08.703 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:08.703 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:08.703 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:08.703 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:08.703 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:08.703 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:08.703 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:08.703 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:08.703 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:08.703 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:08.703 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:08.703 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:08.703 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:08.703 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:08.703 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:08.703 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:08.703 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:08.703 [127/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:08.703 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:08.703 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:08.703 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:08.703 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:08.703 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:08.703 [133/203] Linking target lib/libxnvme.so 00:03:08.703 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:08.703 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:08.703 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:08.703 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:08.703 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:08.703 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:08.703 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:08.703 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:08.962 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 
00:03:08.962 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:08.962 [144/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:08.962 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:08.962 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:08.962 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:08.962 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:08.962 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:08.962 [150/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:08.962 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:08.962 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:08.962 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:08.962 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:08.962 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:08.962 [156/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:08.962 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:08.962 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:08.962 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:08.962 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:08.962 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:08.962 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:08.962 [163/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:08.962 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:09.221 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:09.221 [166/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:09.221 [167/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:09.221 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:09.221 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:09.221 [170/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:09.221 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:09.221 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:09.221 [173/203] Linking static target lib/libxnvme.a 00:03:09.221 [174/203] Linking target tests/xnvme_tests_async_intf 00:03:09.221 [175/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:09.221 [176/203] Linking target tests/xnvme_tests_enum 00:03:09.221 [177/203] Linking target tests/xnvme_tests_cli 00:03:09.221 [178/203] Linking target tests/xnvme_tests_ioworker 00:03:09.221 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:03:09.221 [180/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:09.221 [181/203] Linking target tests/xnvme_tests_lblk 00:03:09.221 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:09.221 [183/203] Linking target tests/xnvme_tests_buf 00:03:09.221 [184/203] Linking target tests/xnvme_tests_scc 00:03:09.479 [185/203] Linking target tests/xnvme_tests_kvs 00:03:09.479 [186/203] Linking target tests/xnvme_tests_znd_append 00:03:09.479 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:09.479 [188/203] Linking target tests/xnvme_tests_map 00:03:09.479 [189/203] Linking target tools/xnvme_file 00:03:09.479 [190/203] Linking target tools/zoned 00:03:09.479 [191/203] Linking target 
tests/xnvme_tests_znd_state 00:03:09.479 [192/203] Linking target tools/lblk 00:03:09.479 [193/203] Linking target tools/xdd 00:03:09.479 [194/203] Linking target tools/kvs 00:03:09.479 [195/203] Linking target examples/xnvme_dev 00:03:09.479 [196/203] Linking target examples/xnvme_enum 00:03:09.479 [197/203] Linking target tools/xnvme 00:03:09.479 [198/203] Linking target examples/zoned_io_async 00:03:09.479 [199/203] Linking target examples/xnvme_single_sync 00:03:09.479 [200/203] Linking target examples/xnvme_single_async 00:03:09.479 [201/203] Linking target examples/xnvme_io_async 00:03:09.479 [202/203] Linking target examples/zoned_io_sync 00:03:09.479 [203/203] Linking target examples/xnvme_hello 00:03:09.479 INFO: autodetecting backend as ninja 00:03:09.479 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:09.479 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:21.703 CC lib/ut/ut.o 00:03:21.703 CC lib/ut_mock/mock.o 00:03:21.703 CC lib/log/log_flags.o 00:03:21.703 CC lib/log/log_deprecated.o 00:03:21.703 CC lib/log/log.o 00:03:21.703 LIB libspdk_ut_mock.a 00:03:21.703 SO libspdk_ut_mock.so.5.0 00:03:21.703 LIB libspdk_ut.a 00:03:21.703 LIB libspdk_log.a 00:03:21.703 SO libspdk_ut.so.1.0 00:03:21.703 SYMLINK libspdk_ut_mock.so 00:03:21.703 SO libspdk_log.so.6.1 00:03:21.703 SYMLINK libspdk_ut.so 00:03:21.703 SYMLINK libspdk_log.so 00:03:21.703 CC lib/dma/dma.o 00:03:21.703 CC lib/ioat/ioat.o 00:03:21.703 CC lib/util/base64.o 00:03:21.703 CC lib/util/bit_array.o 00:03:21.703 CC lib/util/crc16.o 00:03:21.703 CXX lib/trace_parser/trace.o 00:03:21.703 CC lib/util/cpuset.o 00:03:21.703 CC lib/util/crc32.o 00:03:21.703 CC lib/util/crc32c.o 00:03:21.704 CC lib/vfio_user/host/vfio_user_pci.o 00:03:21.704 CC lib/util/crc32_ieee.o 00:03:21.704 CC lib/util/crc64.o 00:03:21.704 CC lib/util/dif.o 00:03:21.704 CC lib/util/fd.o 00:03:21.704 CC lib/util/file.o 00:03:21.704 LIB libspdk_dma.a 00:03:21.704 CC lib/util/hexlify.o 00:03:21.704 SO libspdk_dma.so.3.0 00:03:21.704 CC lib/util/iov.o 00:03:21.704 CC lib/vfio_user/host/vfio_user.o 00:03:21.704 CC lib/util/math.o 00:03:21.704 SYMLINK libspdk_dma.so 00:03:21.704 CC lib/util/pipe.o 00:03:21.704 CC lib/util/strerror_tls.o 00:03:21.704 LIB libspdk_ioat.a 00:03:21.704 SO libspdk_ioat.so.6.0 00:03:21.704 CC lib/util/string.o 00:03:21.704 CC lib/util/uuid.o 00:03:21.704 SYMLINK libspdk_ioat.so 00:03:21.704 CC lib/util/fd_group.o 00:03:21.704 CC lib/util/xor.o 00:03:21.704 CC lib/util/zipf.o 00:03:21.704 LIB libspdk_vfio_user.a 00:03:21.704 SO libspdk_vfio_user.so.4.0 00:03:21.704 SYMLINK libspdk_vfio_user.so 00:03:21.704 LIB libspdk_trace_parser.a 00:03:21.704 LIB libspdk_util.a 00:03:21.704 SO libspdk_trace_parser.so.4.0 00:03:21.704 SO libspdk_util.so.8.0 00:03:21.704 SYMLINK libspdk_trace_parser.so 00:03:21.704 SYMLINK libspdk_util.so 00:03:21.962 CC lib/json/json_parse.o 00:03:21.962 CC lib/rdma/common.o 00:03:21.962 CC lib/json/json_util.o 00:03:21.962 CC lib/json/json_write.o 00:03:21.962 CC lib/rdma/rdma_verbs.o 00:03:21.962 CC lib/vmd/vmd.o 00:03:21.962 CC lib/idxd/idxd.o 00:03:21.962 CC lib/idxd/idxd_user.o 00:03:21.962 CC lib/env_dpdk/env.o 00:03:21.962 CC lib/conf/conf.o 00:03:22.220 CC lib/vmd/led.o 00:03:22.220 CC lib/idxd/idxd_kernel.o 00:03:22.220 CC lib/env_dpdk/memory.o 00:03:22.220 CC lib/env_dpdk/pci.o 00:03:22.220 LIB libspdk_conf.a 00:03:22.220 SO libspdk_conf.so.5.0 00:03:22.220 LIB libspdk_rdma.a 00:03:22.220 LIB libspdk_json.a 00:03:22.220 SO libspdk_rdma.so.5.0 
00:03:22.220 SYMLINK libspdk_conf.so 00:03:22.220 CC lib/env_dpdk/init.o 00:03:22.220 CC lib/env_dpdk/threads.o 00:03:22.220 SO libspdk_json.so.5.1 00:03:22.220 SYMLINK libspdk_rdma.so 00:03:22.220 CC lib/env_dpdk/pci_ioat.o 00:03:22.220 CC lib/env_dpdk/pci_virtio.o 00:03:22.220 SYMLINK libspdk_json.so 00:03:22.220 CC lib/env_dpdk/pci_vmd.o 00:03:22.220 CC lib/env_dpdk/pci_idxd.o 00:03:22.481 CC lib/env_dpdk/pci_event.o 00:03:22.481 CC lib/env_dpdk/sigbus_handler.o 00:03:22.481 LIB libspdk_idxd.a 00:03:22.481 CC lib/env_dpdk/pci_dpdk.o 00:03:22.481 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:22.481 SO libspdk_idxd.so.11.0 00:03:22.481 LIB libspdk_vmd.a 00:03:22.481 SYMLINK libspdk_idxd.so 00:03:22.481 SO libspdk_vmd.so.5.0 00:03:22.481 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:22.481 SYMLINK libspdk_vmd.so 00:03:22.481 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.481 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.481 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.481 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.741 LIB libspdk_jsonrpc.a 00:03:22.741 SO libspdk_jsonrpc.so.5.1 00:03:23.002 SYMLINK libspdk_jsonrpc.so 00:03:23.002 CC lib/rpc/rpc.o 00:03:23.262 LIB libspdk_rpc.a 00:03:23.262 SO libspdk_rpc.so.5.0 00:03:23.262 LIB libspdk_env_dpdk.a 00:03:23.262 SYMLINK libspdk_rpc.so 00:03:23.262 SO libspdk_env_dpdk.so.13.0 00:03:23.523 CC lib/sock/sock.o 00:03:23.523 CC lib/sock/sock_rpc.o 00:03:23.523 CC lib/notify/notify_rpc.o 00:03:23.523 CC lib/notify/notify.o 00:03:23.523 CC lib/trace/trace.o 00:03:23.523 CC lib/trace/trace_rpc.o 00:03:23.523 CC lib/trace/trace_flags.o 00:03:23.523 SYMLINK libspdk_env_dpdk.so 00:03:23.523 LIB libspdk_notify.a 00:03:23.523 SO libspdk_notify.so.5.0 00:03:23.523 SYMLINK libspdk_notify.so 00:03:23.523 LIB libspdk_trace.a 00:03:23.783 SO libspdk_trace.so.9.0 00:03:23.783 SYMLINK libspdk_trace.so 00:03:23.783 LIB libspdk_sock.a 00:03:23.783 SO libspdk_sock.so.8.0 00:03:23.783 CC lib/thread/thread.o 00:03:23.783 CC lib/thread/iobuf.o 00:03:23.783 SYMLINK libspdk_sock.so 00:03:24.042 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.042 CC lib/nvme/nvme_ctrlr.o 00:03:24.042 CC lib/nvme/nvme_ns_cmd.o 00:03:24.042 CC lib/nvme/nvme_fabric.o 00:03:24.042 CC lib/nvme/nvme_pcie_common.o 00:03:24.042 CC lib/nvme/nvme_qpair.o 00:03:24.042 CC lib/nvme/nvme_pcie.o 00:03:24.042 CC lib/nvme/nvme_ns.o 00:03:24.302 CC lib/nvme/nvme.o 00:03:24.562 CC lib/nvme/nvme_quirks.o 00:03:24.562 CC lib/nvme/nvme_transport.o 00:03:24.822 CC lib/nvme/nvme_discovery.o 00:03:24.822 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.822 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.822 CC lib/nvme/nvme_tcp.o 00:03:25.081 CC lib/nvme/nvme_opal.o 00:03:25.081 CC lib/nvme/nvme_io_msg.o 00:03:25.081 CC lib/nvme/nvme_poll_group.o 00:03:25.340 CC lib/nvme/nvme_zns.o 00:03:25.340 CC lib/nvme/nvme_cuse.o 00:03:25.340 CC lib/nvme/nvme_vfio_user.o 00:03:25.340 CC lib/nvme/nvme_rdma.o 00:03:25.340 LIB libspdk_thread.a 00:03:25.340 SO libspdk_thread.so.9.0 00:03:25.340 SYMLINK libspdk_thread.so 00:03:25.600 CC lib/accel/accel.o 00:03:25.600 CC lib/blob/blobstore.o 00:03:25.600 CC lib/init/json_config.o 00:03:25.600 CC lib/init/subsystem.o 00:03:25.860 CC lib/init/subsystem_rpc.o 00:03:25.860 CC lib/init/rpc.o 00:03:25.860 CC lib/blob/request.o 00:03:25.860 CC lib/virtio/virtio.o 00:03:25.860 CC lib/virtio/virtio_vhost_user.o 00:03:25.860 LIB libspdk_init.a 00:03:25.860 SO libspdk_init.so.4.0 00:03:26.119 SYMLINK libspdk_init.so 00:03:26.119 CC lib/blob/zeroes.o 00:03:26.119 CC lib/blob/blob_bs_dev.o 00:03:26.119 CC lib/virtio/virtio_vfio_user.o 
00:03:26.119 CC lib/virtio/virtio_pci.o 00:03:26.119 CC lib/accel/accel_rpc.o 00:03:26.119 CC lib/accel/accel_sw.o 00:03:26.119 CC lib/event/app.o 00:03:26.119 CC lib/event/reactor.o 00:03:26.119 CC lib/event/log_rpc.o 00:03:26.378 CC lib/event/app_rpc.o 00:03:26.378 CC lib/event/scheduler_static.o 00:03:26.378 LIB libspdk_virtio.a 00:03:26.378 SO libspdk_virtio.so.6.0 00:03:26.378 SYMLINK libspdk_virtio.so 00:03:26.639 LIB libspdk_accel.a 00:03:26.639 LIB libspdk_nvme.a 00:03:26.639 SO libspdk_accel.so.14.0 00:03:26.639 LIB libspdk_event.a 00:03:26.639 SYMLINK libspdk_accel.so 00:03:26.639 SO libspdk_event.so.12.0 00:03:26.639 SO libspdk_nvme.so.12.0 00:03:26.899 SYMLINK libspdk_event.so 00:03:26.899 CC lib/bdev/bdev.o 00:03:26.899 CC lib/bdev/bdev_zone.o 00:03:26.899 CC lib/bdev/bdev_rpc.o 00:03:26.899 CC lib/bdev/part.o 00:03:26.899 CC lib/bdev/scsi_nvme.o 00:03:26.899 SYMLINK libspdk_nvme.so 00:03:28.816 LIB libspdk_blob.a 00:03:28.816 SO libspdk_blob.so.10.1 00:03:28.816 SYMLINK libspdk_blob.so 00:03:28.816 CC lib/blobfs/tree.o 00:03:28.816 CC lib/blobfs/blobfs.o 00:03:28.816 CC lib/lvol/lvol.o 00:03:29.389 LIB libspdk_bdev.a 00:03:29.650 SO libspdk_bdev.so.14.0 00:03:29.650 SYMLINK libspdk_bdev.so 00:03:29.650 LIB libspdk_blobfs.a 00:03:29.650 SO libspdk_blobfs.so.9.0 00:03:29.650 LIB libspdk_lvol.a 00:03:29.650 CC lib/nvmf/ctrlr.o 00:03:29.650 CC lib/nvmf/ctrlr_discovery.o 00:03:29.650 CC lib/scsi/lun.o 00:03:29.650 CC lib/nvmf/ctrlr_bdev.o 00:03:29.650 CC lib/scsi/dev.o 00:03:29.650 CC lib/ftl/ftl_core.o 00:03:29.650 CC lib/ublk/ublk.o 00:03:29.650 CC lib/nbd/nbd.o 00:03:29.910 SO libspdk_lvol.so.9.1 00:03:29.910 SYMLINK libspdk_blobfs.so 00:03:29.910 CC lib/nbd/nbd_rpc.o 00:03:29.910 SYMLINK libspdk_lvol.so 00:03:29.910 CC lib/ftl/ftl_init.o 00:03:29.910 CC lib/ftl/ftl_layout.o 00:03:29.910 CC lib/ftl/ftl_debug.o 00:03:29.910 CC lib/scsi/port.o 00:03:29.910 CC lib/nvmf/subsystem.o 00:03:30.171 CC lib/ftl/ftl_io.o 00:03:30.171 CC lib/scsi/scsi.o 00:03:30.171 LIB libspdk_nbd.a 00:03:30.171 CC lib/ublk/ublk_rpc.o 00:03:30.171 SO libspdk_nbd.so.6.0 00:03:30.171 CC lib/ftl/ftl_sb.o 00:03:30.171 SYMLINK libspdk_nbd.so 00:03:30.171 CC lib/ftl/ftl_l2p.o 00:03:30.171 CC lib/ftl/ftl_l2p_flat.o 00:03:30.171 CC lib/scsi/scsi_bdev.o 00:03:30.431 CC lib/scsi/scsi_pr.o 00:03:30.431 CC lib/scsi/scsi_rpc.o 00:03:30.431 LIB libspdk_ublk.a 00:03:30.431 CC lib/scsi/task.o 00:03:30.431 SO libspdk_ublk.so.2.0 00:03:30.431 CC lib/nvmf/nvmf.o 00:03:30.431 CC lib/nvmf/nvmf_rpc.o 00:03:30.431 CC lib/ftl/ftl_nv_cache.o 00:03:30.431 SYMLINK libspdk_ublk.so 00:03:30.431 CC lib/ftl/ftl_band.o 00:03:30.431 CC lib/ftl/ftl_band_ops.o 00:03:30.431 CC lib/ftl/ftl_writer.o 00:03:30.692 CC lib/nvmf/transport.o 00:03:30.692 LIB libspdk_scsi.a 00:03:30.692 CC lib/ftl/ftl_rq.o 00:03:30.692 CC lib/ftl/ftl_reloc.o 00:03:30.692 SO libspdk_scsi.so.8.0 00:03:30.952 CC lib/nvmf/tcp.o 00:03:30.952 SYMLINK libspdk_scsi.so 00:03:30.952 CC lib/nvmf/rdma.o 00:03:30.952 CC lib/ftl/ftl_l2p_cache.o 00:03:31.213 CC lib/ftl/ftl_p2l.o 00:03:31.213 CC lib/iscsi/conn.o 00:03:31.213 CC lib/iscsi/init_grp.o 00:03:31.213 CC lib/vhost/vhost.o 00:03:31.473 CC lib/vhost/vhost_rpc.o 00:03:31.473 CC lib/vhost/vhost_scsi.o 00:03:31.473 CC lib/ftl/mngt/ftl_mngt.o 00:03:31.473 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:31.473 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:31.473 CC lib/iscsi/iscsi.o 00:03:31.473 CC lib/iscsi/md5.o 00:03:31.473 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:31.733 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:31.733 CC 
lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.733 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.733 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.733 CC lib/vhost/vhost_blk.o 00:03:31.993 CC lib/vhost/rte_vhost_user.o 00:03:31.993 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.993 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:31.993 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:31.993 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:31.993 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:31.993 CC lib/ftl/utils/ftl_conf.o 00:03:32.252 CC lib/iscsi/param.o 00:03:32.252 CC lib/ftl/utils/ftl_md.o 00:03:32.252 CC lib/ftl/utils/ftl_mempool.o 00:03:32.252 CC lib/ftl/utils/ftl_bitmap.o 00:03:32.510 CC lib/ftl/utils/ftl_property.o 00:03:32.510 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:32.510 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:32.510 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:32.510 CC lib/iscsi/portal_grp.o 00:03:32.510 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:32.768 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:32.768 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:32.768 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:32.768 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:32.768 CC lib/iscsi/tgt_node.o 00:03:32.768 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:32.768 LIB libspdk_vhost.a 00:03:32.768 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:32.768 SO libspdk_vhost.so.7.1 00:03:32.768 CC lib/iscsi/iscsi_subsystem.o 00:03:32.768 CC lib/iscsi/iscsi_rpc.o 00:03:32.768 CC lib/iscsi/task.o 00:03:32.768 CC lib/ftl/base/ftl_base_dev.o 00:03:32.768 LIB libspdk_nvmf.a 00:03:32.768 SYMLINK libspdk_vhost.so 00:03:32.768 CC lib/ftl/base/ftl_base_bdev.o 00:03:33.026 CC lib/ftl/ftl_trace.o 00:03:33.026 SO libspdk_nvmf.so.17.0 00:03:33.026 LIB libspdk_ftl.a 00:03:33.283 SYMLINK libspdk_nvmf.so 00:03:33.283 LIB libspdk_iscsi.a 00:03:33.283 SO libspdk_ftl.so.8.0 00:03:33.283 SO libspdk_iscsi.so.7.0 00:03:33.541 SYMLINK libspdk_ftl.so 00:03:33.541 SYMLINK libspdk_iscsi.so 00:03:33.541 CC module/env_dpdk/env_dpdk_rpc.o 00:03:33.799 CC module/blob/bdev/blob_bdev.o 00:03:33.799 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:33.799 CC module/sock/posix/posix.o 00:03:33.799 CC module/scheduler/gscheduler/gscheduler.o 00:03:33.799 CC module/accel/ioat/accel_ioat.o 00:03:33.799 CC module/accel/error/accel_error.o 00:03:33.799 CC module/accel/dsa/accel_dsa.o 00:03:33.799 CC module/accel/iaa/accel_iaa.o 00:03:33.799 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:33.799 LIB libspdk_env_dpdk_rpc.a 00:03:33.799 SO libspdk_env_dpdk_rpc.so.5.0 00:03:33.799 LIB libspdk_scheduler_gscheduler.a 00:03:33.799 SYMLINK libspdk_env_dpdk_rpc.so 00:03:33.799 CC module/accel/iaa/accel_iaa_rpc.o 00:03:33.799 CC module/accel/ioat/accel_ioat_rpc.o 00:03:33.799 CC module/accel/error/accel_error_rpc.o 00:03:33.799 SO libspdk_scheduler_gscheduler.so.3.0 00:03:33.799 LIB libspdk_scheduler_dpdk_governor.a 00:03:33.799 LIB libspdk_scheduler_dynamic.a 00:03:33.799 CC module/accel/dsa/accel_dsa_rpc.o 00:03:33.799 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:33.799 SO libspdk_scheduler_dynamic.so.3.0 00:03:33.799 SYMLINK libspdk_scheduler_gscheduler.so 00:03:33.799 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:34.057 LIB libspdk_blob_bdev.a 00:03:34.057 SYMLINK libspdk_scheduler_dynamic.so 00:03:34.057 LIB libspdk_accel_iaa.a 00:03:34.057 SO libspdk_blob_bdev.so.10.1 00:03:34.057 LIB libspdk_accel_error.a 00:03:34.057 SO libspdk_accel_iaa.so.2.0 00:03:34.057 LIB libspdk_accel_ioat.a 00:03:34.057 SO libspdk_accel_error.so.1.0 00:03:34.057 SYMLINK libspdk_blob_bdev.so 00:03:34.057 LIB 
libspdk_accel_dsa.a 00:03:34.057 SO libspdk_accel_ioat.so.5.0 00:03:34.057 SYMLINK libspdk_accel_iaa.so 00:03:34.057 SO libspdk_accel_dsa.so.4.0 00:03:34.057 SYMLINK libspdk_accel_error.so 00:03:34.057 SYMLINK libspdk_accel_ioat.so 00:03:34.057 SYMLINK libspdk_accel_dsa.so 00:03:34.057 CC module/bdev/gpt/gpt.o 00:03:34.057 CC module/bdev/delay/vbdev_delay.o 00:03:34.057 CC module/bdev/malloc/bdev_malloc.o 00:03:34.057 CC module/bdev/nvme/bdev_nvme.o 00:03:34.057 CC module/bdev/lvol/vbdev_lvol.o 00:03:34.057 CC module/bdev/error/vbdev_error.o 00:03:34.057 CC module/blobfs/bdev/blobfs_bdev.o 00:03:34.057 CC module/bdev/null/bdev_null.o 00:03:34.057 CC module/bdev/passthru/vbdev_passthru.o 00:03:34.315 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:34.315 CC module/bdev/gpt/vbdev_gpt.o 00:03:34.315 CC module/bdev/error/vbdev_error_rpc.o 00:03:34.315 CC module/bdev/null/bdev_null_rpc.o 00:03:34.315 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:34.315 LIB libspdk_bdev_error.a 00:03:34.315 LIB libspdk_blobfs_bdev.a 00:03:34.315 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:34.572 LIB libspdk_sock_posix.a 00:03:34.572 SO libspdk_bdev_error.so.5.0 00:03:34.572 SO libspdk_blobfs_bdev.so.5.0 00:03:34.572 SO libspdk_sock_posix.so.5.0 00:03:34.572 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:34.572 SYMLINK libspdk_bdev_error.so 00:03:34.572 LIB libspdk_bdev_delay.a 00:03:34.572 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:34.572 SYMLINK libspdk_blobfs_bdev.so 00:03:34.572 CC module/bdev/nvme/nvme_rpc.o 00:03:34.572 SO libspdk_bdev_delay.so.5.0 00:03:34.572 LIB libspdk_bdev_null.a 00:03:34.572 SYMLINK libspdk_sock_posix.so 00:03:34.572 LIB libspdk_bdev_gpt.a 00:03:34.572 SO libspdk_bdev_null.so.5.0 00:03:34.572 SYMLINK libspdk_bdev_delay.so 00:03:34.572 CC module/bdev/nvme/bdev_mdns_client.o 00:03:34.572 CC module/bdev/nvme/vbdev_opal.o 00:03:34.572 SO libspdk_bdev_gpt.so.5.0 00:03:34.572 LIB libspdk_bdev_passthru.a 00:03:34.572 SYMLINK libspdk_bdev_null.so 00:03:34.572 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:34.572 SO libspdk_bdev_passthru.so.5.0 00:03:34.572 LIB libspdk_bdev_malloc.a 00:03:34.572 SYMLINK libspdk_bdev_gpt.so 00:03:34.572 SO libspdk_bdev_malloc.so.5.0 00:03:34.572 SYMLINK libspdk_bdev_passthru.so 00:03:34.572 CC module/bdev/raid/bdev_raid.o 00:03:34.830 SYMLINK libspdk_bdev_malloc.so 00:03:34.830 CC module/bdev/split/vbdev_split.o 00:03:34.830 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:34.830 CC module/bdev/xnvme/bdev_xnvme.o 00:03:34.830 CC module/bdev/split/vbdev_split_rpc.o 00:03:34.830 CC module/bdev/aio/bdev_aio.o 00:03:34.830 CC module/bdev/ftl/bdev_ftl.o 00:03:34.830 LIB libspdk_bdev_lvol.a 00:03:34.830 LIB libspdk_bdev_split.a 00:03:34.830 SO libspdk_bdev_lvol.so.5.0 00:03:34.830 SO libspdk_bdev_split.so.5.0 00:03:35.088 CC module/bdev/iscsi/bdev_iscsi.o 00:03:35.088 SYMLINK libspdk_bdev_lvol.so 00:03:35.088 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.088 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:35.088 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:35.088 SYMLINK libspdk_bdev_split.so 00:03:35.088 CC module/bdev/aio/bdev_aio_rpc.o 00:03:35.088 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:35.088 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.088 CC module/bdev/raid/bdev_raid_rpc.o 00:03:35.088 LIB libspdk_bdev_xnvme.a 00:03:35.088 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:35.088 LIB libspdk_bdev_zone_block.a 00:03:35.088 SO libspdk_bdev_xnvme.so.2.0 00:03:35.088 LIB libspdk_bdev_aio.a 00:03:35.088 SO libspdk_bdev_zone_block.so.5.0 00:03:35.088 SO 
libspdk_bdev_aio.so.5.0 00:03:35.347 SYMLINK libspdk_bdev_xnvme.so 00:03:35.347 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:35.347 SYMLINK libspdk_bdev_aio.so 00:03:35.347 SYMLINK libspdk_bdev_zone_block.so 00:03:35.347 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:35.347 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:35.347 CC module/bdev/raid/bdev_raid_sb.o 00:03:35.347 LIB libspdk_bdev_iscsi.a 00:03:35.347 LIB libspdk_bdev_ftl.a 00:03:35.347 SO libspdk_bdev_iscsi.so.5.0 00:03:35.347 SO libspdk_bdev_ftl.so.5.0 00:03:35.347 CC module/bdev/raid/raid0.o 00:03:35.347 CC module/bdev/raid/raid1.o 00:03:35.347 SYMLINK libspdk_bdev_iscsi.so 00:03:35.347 CC module/bdev/raid/concat.o 00:03:35.347 SYMLINK libspdk_bdev_ftl.so 00:03:35.605 LIB libspdk_bdev_virtio.a 00:03:35.605 SO libspdk_bdev_virtio.so.5.0 00:03:35.605 LIB libspdk_bdev_raid.a 00:03:35.605 SO libspdk_bdev_raid.so.5.0 00:03:35.605 SYMLINK libspdk_bdev_virtio.so 00:03:35.605 SYMLINK libspdk_bdev_raid.so 00:03:35.949 LIB libspdk_bdev_nvme.a 00:03:35.949 SO libspdk_bdev_nvme.so.6.0 00:03:36.207 SYMLINK libspdk_bdev_nvme.so 00:03:36.467 CC module/event/subsystems/scheduler/scheduler.o 00:03:36.467 CC module/event/subsystems/sock/sock.o 00:03:36.467 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:36.467 CC module/event/subsystems/iobuf/iobuf.o 00:03:36.467 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:36.467 CC module/event/subsystems/vmd/vmd.o 00:03:36.467 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:36.467 LIB libspdk_event_sock.a 00:03:36.467 LIB libspdk_event_scheduler.a 00:03:36.467 LIB libspdk_event_vhost_blk.a 00:03:36.467 LIB libspdk_event_iobuf.a 00:03:36.467 LIB libspdk_event_vmd.a 00:03:36.467 SO libspdk_event_sock.so.4.0 00:03:36.467 SO libspdk_event_vhost_blk.so.2.0 00:03:36.467 SO libspdk_event_scheduler.so.3.0 00:03:36.467 SO libspdk_event_vmd.so.5.0 00:03:36.467 SO libspdk_event_iobuf.so.2.0 00:03:36.726 SYMLINK libspdk_event_sock.so 00:03:36.726 SYMLINK libspdk_event_vhost_blk.so 00:03:36.726 SYMLINK libspdk_event_scheduler.so 00:03:36.726 SYMLINK libspdk_event_vmd.so 00:03:36.726 SYMLINK libspdk_event_iobuf.so 00:03:36.726 CC module/event/subsystems/accel/accel.o 00:03:36.983 LIB libspdk_event_accel.a 00:03:36.983 SO libspdk_event_accel.so.5.0 00:03:36.983 SYMLINK libspdk_event_accel.so 00:03:37.243 CC module/event/subsystems/bdev/bdev.o 00:03:37.243 LIB libspdk_event_bdev.a 00:03:37.243 SO libspdk_event_bdev.so.5.0 00:03:37.243 SYMLINK libspdk_event_bdev.so 00:03:37.503 CC module/event/subsystems/nbd/nbd.o 00:03:37.503 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.503 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.503 CC module/event/subsystems/scsi/scsi.o 00:03:37.503 CC module/event/subsystems/ublk/ublk.o 00:03:37.503 LIB libspdk_event_nbd.a 00:03:37.503 LIB libspdk_event_ublk.a 00:03:37.503 LIB libspdk_event_scsi.a 00:03:37.503 SO libspdk_event_nbd.so.5.0 00:03:37.503 SO libspdk_event_ublk.so.2.0 00:03:37.503 SO libspdk_event_scsi.so.5.0 00:03:37.503 SYMLINK libspdk_event_nbd.so 00:03:37.763 SYMLINK libspdk_event_ublk.so 00:03:37.763 LIB libspdk_event_nvmf.a 00:03:37.763 SYMLINK libspdk_event_scsi.so 00:03:37.763 SO libspdk_event_nvmf.so.5.0 00:03:37.763 SYMLINK libspdk_event_nvmf.so 00:03:37.763 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.763 CC module/event/subsystems/iscsi/iscsi.o 00:03:38.022 LIB libspdk_event_vhost_scsi.a 00:03:38.022 LIB libspdk_event_iscsi.a 00:03:38.022 SO libspdk_event_iscsi.so.5.0 00:03:38.022 SO libspdk_event_vhost_scsi.so.2.0 00:03:38.022 SYMLINK 
libspdk_event_iscsi.so 00:03:38.022 SYMLINK libspdk_event_vhost_scsi.so 00:03:38.022 SO libspdk.so.5.0 00:03:38.022 SYMLINK libspdk.so 00:03:38.280 CXX app/trace/trace.o 00:03:38.280 CC app/trace_record/trace_record.o 00:03:38.280 CC app/nvmf_tgt/nvmf_main.o 00:03:38.280 CC examples/accel/perf/accel_perf.o 00:03:38.280 CC examples/ioat/perf/perf.o 00:03:38.280 CC app/iscsi_tgt/iscsi_tgt.o 00:03:38.280 CC test/app/bdev_svc/bdev_svc.o 00:03:38.280 CC examples/blob/hello_world/hello_blob.o 00:03:38.280 CC test/accel/dif/dif.o 00:03:38.280 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.280 LINK nvmf_tgt 00:03:38.539 LINK spdk_trace_record 00:03:38.539 LINK bdev_svc 00:03:38.539 LINK iscsi_tgt 00:03:38.539 LINK hello_blob 00:03:38.539 LINK ioat_perf 00:03:38.539 LINK spdk_trace 00:03:38.539 LINK hello_bdev 00:03:38.539 CC examples/bdev/bdevperf/bdevperf.o 00:03:38.539 LINK dif 00:03:38.539 CC examples/nvme/hello_world/hello_world.o 00:03:38.539 CC examples/ioat/verify/verify.o 00:03:38.798 CC examples/sock/hello_world/hello_sock.o 00:03:38.798 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:38.798 CC examples/blob/cli/blobcli.o 00:03:38.798 LINK accel_perf 00:03:38.798 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.798 CC app/spdk_tgt/spdk_tgt.o 00:03:38.798 LINK hello_world 00:03:38.798 LINK verify 00:03:38.798 LINK lsvmd 00:03:38.798 LINK hello_sock 00:03:38.798 CC examples/nvmf/nvmf/nvmf.o 00:03:38.798 CC app/spdk_lspci/spdk_lspci.o 00:03:38.798 CC examples/nvme/reconnect/reconnect.o 00:03:39.056 LINK spdk_tgt 00:03:39.056 LINK spdk_lspci 00:03:39.056 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.056 CC examples/vmd/led/led.o 00:03:39.056 CC test/bdev/bdevio/bdevio.o 00:03:39.056 LINK nvme_fuzz 00:03:39.056 LINK nvmf 00:03:39.056 LINK blobcli 00:03:39.056 LINK led 00:03:39.056 CC app/spdk_nvme_perf/perf.o 00:03:39.315 LINK bdevperf 00:03:39.315 CC examples/util/zipf/zipf.o 00:03:39.315 LINK reconnect 00:03:39.315 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:39.315 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:39.315 CC app/spdk_nvme_identify/identify.o 00:03:39.315 LINK zipf 00:03:39.315 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:39.315 LINK nvme_manage 00:03:39.315 CC app/spdk_nvme_discover/discovery_aer.o 00:03:39.573 LINK bdevio 00:03:39.573 CC examples/thread/thread/thread_ex.o 00:03:39.573 CC app/spdk_top/spdk_top.o 00:03:39.573 CC app/vhost/vhost.o 00:03:39.573 CC examples/nvme/arbitration/arbitration.o 00:03:39.573 LINK spdk_nvme_discover 00:03:39.573 CC examples/nvme/hotplug/hotplug.o 00:03:39.573 LINK thread 00:03:39.830 LINK vhost 00:03:39.830 LINK vhost_fuzz 00:03:39.830 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:39.830 LINK hotplug 00:03:39.830 LINK arbitration 00:03:39.830 CC app/spdk_dd/spdk_dd.o 00:03:39.830 CC examples/idxd/perf/perf.o 00:03:40.087 LINK cmb_copy 00:03:40.087 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:40.087 LINK spdk_nvme_perf 00:03:40.087 LINK spdk_nvme_identify 00:03:40.087 CC examples/nvme/abort/abort.o 00:03:40.087 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.087 LINK interrupt_tgt 00:03:40.087 CC test/app/histogram_perf/histogram_perf.o 00:03:40.087 CC test/app/jsoncat/jsoncat.o 00:03:40.087 CC app/fio/nvme/fio_plugin.o 00:03:40.087 LINK spdk_dd 00:03:40.345 LINK histogram_perf 00:03:40.345 LINK pmr_persistence 00:03:40.345 LINK idxd_perf 00:03:40.345 LINK jsoncat 00:03:40.345 CC test/app/stub/stub.o 00:03:40.345 LINK spdk_top 00:03:40.345 TEST_HEADER include/spdk/accel.h 00:03:40.345 LINK abort 00:03:40.345 TEST_HEADER 
include/spdk/accel_module.h 00:03:40.345 TEST_HEADER include/spdk/assert.h 00:03:40.345 TEST_HEADER include/spdk/barrier.h 00:03:40.345 TEST_HEADER include/spdk/base64.h 00:03:40.345 CC app/fio/bdev/fio_plugin.o 00:03:40.345 TEST_HEADER include/spdk/bdev.h 00:03:40.345 TEST_HEADER include/spdk/bdev_module.h 00:03:40.345 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.345 TEST_HEADER include/spdk/bit_array.h 00:03:40.345 TEST_HEADER include/spdk/bit_pool.h 00:03:40.345 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.345 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.345 TEST_HEADER include/spdk/blobfs.h 00:03:40.345 TEST_HEADER include/spdk/blob.h 00:03:40.345 TEST_HEADER include/spdk/conf.h 00:03:40.345 TEST_HEADER include/spdk/config.h 00:03:40.603 TEST_HEADER include/spdk/cpuset.h 00:03:40.603 TEST_HEADER include/spdk/crc16.h 00:03:40.603 TEST_HEADER include/spdk/crc32.h 00:03:40.603 TEST_HEADER include/spdk/crc64.h 00:03:40.603 TEST_HEADER include/spdk/dif.h 00:03:40.603 TEST_HEADER include/spdk/dma.h 00:03:40.603 TEST_HEADER include/spdk/endian.h 00:03:40.603 LINK stub 00:03:40.603 TEST_HEADER include/spdk/env_dpdk.h 00:03:40.603 TEST_HEADER include/spdk/env.h 00:03:40.603 TEST_HEADER include/spdk/event.h 00:03:40.603 TEST_HEADER include/spdk/fd_group.h 00:03:40.603 TEST_HEADER include/spdk/fd.h 00:03:40.603 TEST_HEADER include/spdk/file.h 00:03:40.603 TEST_HEADER include/spdk/ftl.h 00:03:40.603 CC test/blobfs/mkfs/mkfs.o 00:03:40.603 TEST_HEADER include/spdk/gpt_spec.h 00:03:40.603 TEST_HEADER include/spdk/hexlify.h 00:03:40.603 TEST_HEADER include/spdk/histogram_data.h 00:03:40.603 TEST_HEADER include/spdk/idxd.h 00:03:40.603 TEST_HEADER include/spdk/idxd_spec.h 00:03:40.603 TEST_HEADER include/spdk/init.h 00:03:40.603 TEST_HEADER include/spdk/ioat.h 00:03:40.603 TEST_HEADER include/spdk/ioat_spec.h 00:03:40.603 TEST_HEADER include/spdk/iscsi_spec.h 00:03:40.603 TEST_HEADER include/spdk/json.h 00:03:40.603 TEST_HEADER include/spdk/jsonrpc.h 00:03:40.603 TEST_HEADER include/spdk/likely.h 00:03:40.603 TEST_HEADER include/spdk/log.h 00:03:40.603 TEST_HEADER include/spdk/lvol.h 00:03:40.603 TEST_HEADER include/spdk/memory.h 00:03:40.603 TEST_HEADER include/spdk/mmio.h 00:03:40.603 TEST_HEADER include/spdk/nbd.h 00:03:40.603 TEST_HEADER include/spdk/notify.h 00:03:40.603 TEST_HEADER include/spdk/nvme.h 00:03:40.603 TEST_HEADER include/spdk/nvme_intel.h 00:03:40.603 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:40.603 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:40.603 TEST_HEADER include/spdk/nvme_spec.h 00:03:40.603 TEST_HEADER include/spdk/nvme_zns.h 00:03:40.603 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:40.603 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:40.603 TEST_HEADER include/spdk/nvmf.h 00:03:40.603 TEST_HEADER include/spdk/nvmf_spec.h 00:03:40.603 TEST_HEADER include/spdk/nvmf_transport.h 00:03:40.603 TEST_HEADER include/spdk/opal.h 00:03:40.603 TEST_HEADER include/spdk/opal_spec.h 00:03:40.603 TEST_HEADER include/spdk/pci_ids.h 00:03:40.603 CC test/dma/test_dma/test_dma.o 00:03:40.603 TEST_HEADER include/spdk/pipe.h 00:03:40.603 TEST_HEADER include/spdk/queue.h 00:03:40.603 TEST_HEADER include/spdk/reduce.h 00:03:40.603 TEST_HEADER include/spdk/rpc.h 00:03:40.603 TEST_HEADER include/spdk/scheduler.h 00:03:40.603 TEST_HEADER include/spdk/scsi.h 00:03:40.603 TEST_HEADER include/spdk/scsi_spec.h 00:03:40.603 TEST_HEADER include/spdk/sock.h 00:03:40.603 TEST_HEADER include/spdk/stdinc.h 00:03:40.603 TEST_HEADER include/spdk/string.h 00:03:40.603 TEST_HEADER include/spdk/thread.h 
00:03:40.603 TEST_HEADER include/spdk/trace.h 00:03:40.603 TEST_HEADER include/spdk/trace_parser.h 00:03:40.603 TEST_HEADER include/spdk/tree.h 00:03:40.603 TEST_HEADER include/spdk/ublk.h 00:03:40.603 TEST_HEADER include/spdk/util.h 00:03:40.603 TEST_HEADER include/spdk/uuid.h 00:03:40.603 CC test/env/mem_callbacks/mem_callbacks.o 00:03:40.603 TEST_HEADER include/spdk/version.h 00:03:40.603 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.603 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:40.603 TEST_HEADER include/spdk/vhost.h 00:03:40.603 TEST_HEADER include/spdk/vmd.h 00:03:40.603 TEST_HEADER include/spdk/xor.h 00:03:40.603 TEST_HEADER include/spdk/zipf.h 00:03:40.603 CXX test/cpp_headers/accel.o 00:03:40.603 CC test/event/event_perf/event_perf.o 00:03:40.603 CXX test/cpp_headers/accel_module.o 00:03:40.603 LINK mkfs 00:03:40.603 CC test/lvol/esnap/esnap.o 00:03:40.603 LINK spdk_nvme 00:03:40.861 LINK event_perf 00:03:40.861 CXX test/cpp_headers/assert.o 00:03:40.861 CC test/event/reactor/reactor.o 00:03:40.861 CC test/event/reactor_perf/reactor_perf.o 00:03:40.861 CC test/env/vtophys/vtophys.o 00:03:40.861 LINK spdk_bdev 00:03:40.861 CXX test/cpp_headers/barrier.o 00:03:40.861 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.861 LINK reactor 00:03:40.861 LINK test_dma 00:03:40.861 CXX test/cpp_headers/base64.o 00:03:40.861 LINK reactor_perf 00:03:40.861 LINK mem_callbacks 00:03:41.119 LINK vtophys 00:03:41.119 LINK iscsi_fuzz 00:03:41.119 CC test/env/memory/memory_ut.o 00:03:41.119 LINK env_dpdk_post_init 00:03:41.119 CXX test/cpp_headers/bdev.o 00:03:41.119 CC test/env/pci/pci_ut.o 00:03:41.119 CXX test/cpp_headers/bdev_module.o 00:03:41.119 CC test/event/app_repeat/app_repeat.o 00:03:41.119 CC test/event/scheduler/scheduler.o 00:03:41.119 CXX test/cpp_headers/bdev_zone.o 00:03:41.119 CC test/rpc_client/rpc_client_test.o 00:03:41.119 CC test/nvme/aer/aer.o 00:03:41.119 LINK app_repeat 00:03:41.376 CC test/thread/poller_perf/poller_perf.o 00:03:41.376 CC test/nvme/reset/reset.o 00:03:41.376 LINK rpc_client_test 00:03:41.376 CXX test/cpp_headers/bit_array.o 00:03:41.376 LINK scheduler 00:03:41.376 CC test/nvme/sgl/sgl.o 00:03:41.376 LINK poller_perf 00:03:41.376 CC test/nvme/e2edp/nvme_dp.o 00:03:41.376 CXX test/cpp_headers/bit_pool.o 00:03:41.376 LINK aer 00:03:41.376 CXX test/cpp_headers/blob_bdev.o 00:03:41.376 LINK pci_ut 00:03:41.634 CXX test/cpp_headers/blobfs_bdev.o 00:03:41.634 LINK reset 00:03:41.634 CXX test/cpp_headers/blobfs.o 00:03:41.634 CXX test/cpp_headers/blob.o 00:03:41.634 CXX test/cpp_headers/conf.o 00:03:41.634 LINK sgl 00:03:41.634 LINK nvme_dp 00:03:41.634 CXX test/cpp_headers/config.o 00:03:41.634 CC test/nvme/overhead/overhead.o 00:03:41.634 CXX test/cpp_headers/cpuset.o 00:03:41.634 CXX test/cpp_headers/crc16.o 00:03:41.634 CC test/nvme/err_injection/err_injection.o 00:03:41.634 CXX test/cpp_headers/crc32.o 00:03:41.634 CXX test/cpp_headers/crc64.o 00:03:41.892 LINK memory_ut 00:03:41.892 CC test/nvme/startup/startup.o 00:03:41.892 CXX test/cpp_headers/dif.o 00:03:41.892 CC test/nvme/reserve/reserve.o 00:03:41.892 CXX test/cpp_headers/dma.o 00:03:41.892 CXX test/cpp_headers/endian.o 00:03:41.892 LINK err_injection 00:03:41.892 CC test/nvme/simple_copy/simple_copy.o 00:03:41.892 LINK startup 00:03:41.892 CXX test/cpp_headers/env_dpdk.o 00:03:41.892 CXX test/cpp_headers/env.o 00:03:41.892 CXX test/cpp_headers/event.o 00:03:41.892 LINK reserve 00:03:41.892 LINK overhead 00:03:41.892 CXX test/cpp_headers/fd_group.o 00:03:41.892 CXX 
test/cpp_headers/fd.o 00:03:42.150 CXX test/cpp_headers/file.o 00:03:42.150 CC test/nvme/connect_stress/connect_stress.o 00:03:42.150 CXX test/cpp_headers/ftl.o 00:03:42.150 CXX test/cpp_headers/gpt_spec.o 00:03:42.150 CXX test/cpp_headers/hexlify.o 00:03:42.150 LINK simple_copy 00:03:42.150 CC test/nvme/boot_partition/boot_partition.o 00:03:42.150 CXX test/cpp_headers/histogram_data.o 00:03:42.150 CC test/nvme/compliance/nvme_compliance.o 00:03:42.150 LINK connect_stress 00:03:42.150 CXX test/cpp_headers/idxd.o 00:03:42.150 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.150 CXX test/cpp_headers/idxd_spec.o 00:03:42.151 LINK boot_partition 00:03:42.151 CXX test/cpp_headers/init.o 00:03:42.409 CXX test/cpp_headers/ioat.o 00:03:42.409 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.409 CXX test/cpp_headers/ioat_spec.o 00:03:42.409 CXX test/cpp_headers/iscsi_spec.o 00:03:42.409 CXX test/cpp_headers/json.o 00:03:42.409 CXX test/cpp_headers/jsonrpc.o 00:03:42.409 CC test/nvme/fdp/fdp.o 00:03:42.409 LINK fused_ordering 00:03:42.409 CXX test/cpp_headers/likely.o 00:03:42.409 CXX test/cpp_headers/log.o 00:03:42.409 CXX test/cpp_headers/lvol.o 00:03:42.409 CXX test/cpp_headers/memory.o 00:03:42.409 CXX test/cpp_headers/mmio.o 00:03:42.409 LINK doorbell_aers 00:03:42.409 LINK nvme_compliance 00:03:42.667 CXX test/cpp_headers/nbd.o 00:03:42.667 CC test/nvme/cuse/cuse.o 00:03:42.667 CXX test/cpp_headers/notify.o 00:03:42.667 CXX test/cpp_headers/nvme.o 00:03:42.667 CXX test/cpp_headers/nvme_intel.o 00:03:42.667 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.667 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:42.667 LINK fdp 00:03:42.667 CXX test/cpp_headers/nvme_spec.o 00:03:42.667 CXX test/cpp_headers/nvme_zns.o 00:03:42.667 CXX test/cpp_headers/nvmf_cmd.o 00:03:42.667 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:42.667 CXX test/cpp_headers/nvmf.o 00:03:42.667 CXX test/cpp_headers/nvmf_spec.o 00:03:42.667 CXX test/cpp_headers/nvmf_transport.o 00:03:42.667 CXX test/cpp_headers/opal.o 00:03:42.667 CXX test/cpp_headers/opal_spec.o 00:03:42.667 CXX test/cpp_headers/pci_ids.o 00:03:42.926 CXX test/cpp_headers/pipe.o 00:03:42.926 CXX test/cpp_headers/queue.o 00:03:42.926 CXX test/cpp_headers/reduce.o 00:03:42.926 CXX test/cpp_headers/rpc.o 00:03:42.926 CXX test/cpp_headers/scheduler.o 00:03:42.926 CXX test/cpp_headers/scsi.o 00:03:42.926 CXX test/cpp_headers/scsi_spec.o 00:03:42.926 CXX test/cpp_headers/sock.o 00:03:42.926 CXX test/cpp_headers/stdinc.o 00:03:42.926 CXX test/cpp_headers/string.o 00:03:42.926 CXX test/cpp_headers/thread.o 00:03:42.926 CXX test/cpp_headers/trace.o 00:03:42.926 CXX test/cpp_headers/trace_parser.o 00:03:42.926 CXX test/cpp_headers/tree.o 00:03:42.926 CXX test/cpp_headers/ublk.o 00:03:42.926 CXX test/cpp_headers/util.o 00:03:42.926 CXX test/cpp_headers/uuid.o 00:03:42.926 CXX test/cpp_headers/version.o 00:03:43.184 CXX test/cpp_headers/vfio_user_pci.o 00:03:43.184 CXX test/cpp_headers/vfio_user_spec.o 00:03:43.184 CXX test/cpp_headers/vhost.o 00:03:43.184 CXX test/cpp_headers/vmd.o 00:03:43.184 CXX test/cpp_headers/xor.o 00:03:43.184 CXX test/cpp_headers/zipf.o 00:03:43.442 LINK cuse 00:03:44.816 LINK esnap 00:03:45.075 00:03:45.075 real 0m40.119s 00:03:45.075 user 3m50.476s 00:03:45.075 sys 0m44.559s 00:03:45.075 03:59:46 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:45.075 03:59:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.075 ************************************ 00:03:45.075 END TEST make 00:03:45.075 ************************************ 00:03:45.075 
03:59:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:45.075 03:59:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:45.075 03:59:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:45.333 03:59:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:45.333 03:59:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:45.333 03:59:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:45.333 03:59:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:45.333 03:59:46 -- scripts/common.sh@335 -- # IFS=.-: 00:03:45.333 03:59:46 -- scripts/common.sh@335 -- # read -ra ver1 00:03:45.333 03:59:46 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.333 03:59:46 -- scripts/common.sh@336 -- # read -ra ver2 00:03:45.333 03:59:46 -- scripts/common.sh@337 -- # local 'op=<' 00:03:45.333 03:59:46 -- scripts/common.sh@339 -- # ver1_l=2 00:03:45.333 03:59:46 -- scripts/common.sh@340 -- # ver2_l=1 00:03:45.333 03:59:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:45.333 03:59:46 -- scripts/common.sh@343 -- # case "$op" in 00:03:45.333 03:59:46 -- scripts/common.sh@344 -- # : 1 00:03:45.333 03:59:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:45.333 03:59:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:45.333 03:59:46 -- scripts/common.sh@364 -- # decimal 1 00:03:45.333 03:59:46 -- scripts/common.sh@352 -- # local d=1 00:03:45.333 03:59:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.333 03:59:46 -- scripts/common.sh@354 -- # echo 1 00:03:45.333 03:59:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:45.333 03:59:46 -- scripts/common.sh@365 -- # decimal 2 00:03:45.333 03:59:46 -- scripts/common.sh@352 -- # local d=2 00:03:45.334 03:59:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.334 03:59:46 -- scripts/common.sh@354 -- # echo 2 00:03:45.334 03:59:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:45.334 03:59:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:45.334 03:59:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:45.334 03:59:46 -- scripts/common.sh@367 -- # return 0 00:03:45.334 03:59:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.334 03:59:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:45.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.334 --rc genhtml_branch_coverage=1 00:03:45.334 --rc genhtml_function_coverage=1 00:03:45.334 --rc genhtml_legend=1 00:03:45.334 --rc geninfo_all_blocks=1 00:03:45.334 --rc geninfo_unexecuted_blocks=1 00:03:45.334 00:03:45.334 ' 00:03:45.334 03:59:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:45.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.334 --rc genhtml_branch_coverage=1 00:03:45.334 --rc genhtml_function_coverage=1 00:03:45.334 --rc genhtml_legend=1 00:03:45.334 --rc geninfo_all_blocks=1 00:03:45.334 --rc geninfo_unexecuted_blocks=1 00:03:45.334 00:03:45.334 ' 00:03:45.334 03:59:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:45.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.334 --rc genhtml_branch_coverage=1 00:03:45.334 --rc genhtml_function_coverage=1 00:03:45.334 --rc genhtml_legend=1 00:03:45.334 --rc geninfo_all_blocks=1 00:03:45.334 --rc geninfo_unexecuted_blocks=1 00:03:45.334 00:03:45.334 ' 00:03:45.334 03:59:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:45.334 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:03:45.334 --rc genhtml_branch_coverage=1 00:03:45.334 --rc genhtml_function_coverage=1 00:03:45.334 --rc genhtml_legend=1 00:03:45.334 --rc geninfo_all_blocks=1 00:03:45.334 --rc geninfo_unexecuted_blocks=1 00:03:45.334 00:03:45.334 ' 00:03:45.334 03:59:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:45.334 03:59:46 -- nvmf/common.sh@7 -- # uname -s 00:03:45.334 03:59:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.334 03:59:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.334 03:59:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.334 03:59:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.334 03:59:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.334 03:59:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.334 03:59:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.334 03:59:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.334 03:59:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.334 03:59:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.334 03:59:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ce961f53-3d34-4579-a607-64c3d960c355 00:03:45.334 03:59:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=ce961f53-3d34-4579-a607-64c3d960c355 00:03:45.334 03:59:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.334 03:59:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.334 03:59:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.334 03:59:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:45.334 03:59:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.334 03:59:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.334 03:59:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.334 03:59:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.334 03:59:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.334 03:59:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.334 03:59:46 -- paths/export.sh@5 -- # export PATH 00:03:45.334 03:59:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.334 03:59:46 -- nvmf/common.sh@46 -- # : 0 00:03:45.334 03:59:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:45.334 03:59:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:45.334 03:59:46 -- nvmf/common.sh@24 -- 
# '[' 0 -eq 1 ']' 00:03:45.334 03:59:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.334 03:59:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.334 03:59:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:45.334 03:59:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:45.334 03:59:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:45.334 03:59:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.334 03:59:46 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.334 03:59:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.334 03:59:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.334 03:59:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.334 03:59:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.334 03:59:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.334 03:59:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.334 03:59:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.334 03:59:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:45.334 03:59:46 -- spdk/autotest.sh@48 -- # udevadm_pid=60571 00:03:45.334 03:59:46 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:45.334 03:59:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.334 03:59:46 -- spdk/autotest.sh@54 -- # echo 60585 00:03:45.334 03:59:46 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:45.334 03:59:46 -- spdk/autotest.sh@56 -- # echo 60587 00:03:45.334 03:59:46 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:45.334 03:59:46 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:45.334 03:59:46 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.334 03:59:46 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:45.334 03:59:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.334 03:59:46 -- common/autotest_common.sh@10 -- # set +x 00:03:45.334 03:59:46 -- spdk/autotest.sh@70 -- # create_test_list 00:03:45.334 03:59:46 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:45.334 03:59:46 -- common/autotest_common.sh@10 -- # set +x 00:03:45.334 03:59:46 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:45.334 03:59:46 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:45.334 03:59:46 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:45.334 03:59:46 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:45.334 03:59:46 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:45.334 03:59:46 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:45.334 03:59:46 -- common/autotest_common.sh@1450 -- # uname 00:03:45.334 03:59:46 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:45.334 03:59:46 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:45.334 03:59:46 -- common/autotest_common.sh@1470 -- # uname 00:03:45.334 03:59:46 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:45.334 03:59:46 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:45.334 03:59:46 -- spdk/autotest.sh@81 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:45.334 lcov: LCOV version 1.15 00:03:45.334 03:59:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:53.488 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:53.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:53.488 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:53.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:53.488 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:53.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:11.611 04:00:13 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:11.611 04:00:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.611 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:11.611 04:00:13 -- spdk/autotest.sh@89 -- # rm -f 00:04:11.611 04:00:13 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.178 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:12.178 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:12.436 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:12.436 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:12.436 04:00:13 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:12.436 04:00:13 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:12.436 04:00:13 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:12.436 04:00:13 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:12.436 04:00:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:12.436 04:00:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:12.436 04:00:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n2 00:04:12.436 04:00:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n2 00:04:12.436 04:00:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:12.436 04:00:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n3 00:04:12.436 04:00:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n3 00:04:12.436 04:00:13 -- 
common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:12.436 04:00:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:12.436 04:00:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:12.436 04:00:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:12.436 04:00:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:12.436 04:00:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:12.436 04:00:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:12.437 04:00:13 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:12.437 04:00:13 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 00:04:12.437 04:00:13 -- spdk/autotest.sh@108 -- # grep -v p 00:04:12.437 04:00:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:12.437 04:00:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:12.437 04:00:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:12.437 04:00:13 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:12.437 04:00:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.437 No valid GPT data, bailing 00:04:12.437 04:00:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.437 04:00:14 -- scripts/common.sh@393 -- # pt= 00:04:12.437 04:00:14 -- scripts/common.sh@394 -- # return 1 00:04:12.437 04:00:14 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.437 1+0 records in 00:04:12.437 1+0 records out 00:04:12.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00301252 s, 348 MB/s 00:04:12.437 04:00:14 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:12.437 04:00:14 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:12.437 04:00:14 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n2 00:04:12.437 04:00:14 -- scripts/common.sh@380 -- # local block=/dev/nvme0n2 pt 00:04:12.437 04:00:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:12.437 No valid GPT data, bailing 
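The entries above trace autotest's namespace-wipe pass: each /dev/nvme*n* device (partitions filtered out with grep -v p) is probed for an existing partition table via spdk-gpt.py and blkid, and only table-less devices get their first megabyte zeroed with dd. A condensed sketch of that loop, reconstructed from the traced commands rather than taken from autotest.sh or scripts/common.sh, whose exact exit-status handling may differ:

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        # block_in_use (sketch): a device with any blkid-visible partition
        # table is treated as in use and left alone; the real script also
        # runs scripts/spdk-gpt.py on the device first, as traced above
        pt=$(blkid -s PTTYPE -o value "$dev")
        [[ -n $pt ]] && continue
        # no partition table: zero the first 1 MiB so stale metadata
        # cannot leak into later tests (matches the dd statistics below)
        dd if=/dev/zero of="$dev" bs=1M count=1
    done

As the dd statistics in the surrounding entries show, each wipe copies exactly 1048576 bytes; only the per-namespace throughput varies.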
00:04:12.437 04:00:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:12.437 04:00:14 -- scripts/common.sh@393 -- # pt= 00:04:12.437 04:00:14 -- scripts/common.sh@394 -- # return 1 00:04:12.437 04:00:14 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:12.437 1+0 records in 00:04:12.437 1+0 records out 00:04:12.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477132 s, 220 MB/s 00:04:12.437 04:00:14 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:12.437 04:00:14 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:12.437 04:00:14 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n3 00:04:12.437 04:00:14 -- scripts/common.sh@380 -- # local block=/dev/nvme0n3 pt 00:04:12.437 04:00:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:12.437 No valid GPT data, bailing 00:04:12.437 04:00:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:12.437 04:00:14 -- scripts/common.sh@393 -- # pt= 00:04:12.437 04:00:14 -- scripts/common.sh@394 -- # return 1 00:04:12.437 04:00:14 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:12.437 1+0 records in 00:04:12.437 1+0 records out 00:04:12.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00309894 s, 338 MB/s 00:04:12.437 04:00:14 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:12.437 04:00:14 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:12.437 04:00:14 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:12.437 04:00:14 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:12.437 04:00:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:12.695 No valid GPT data, bailing 00:04:12.695 04:00:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:12.695 04:00:14 -- scripts/common.sh@393 -- # pt= 00:04:12.695 04:00:14 -- scripts/common.sh@394 -- # return 1 00:04:12.695 04:00:14 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:12.695 1+0 records in 00:04:12.695 1+0 records out 00:04:12.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464614 s, 226 MB/s 00:04:12.695 04:00:14 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:12.695 04:00:14 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:12.695 04:00:14 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:04:12.695 04:00:14 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:12.695 04:00:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:12.695 No valid GPT data, bailing 00:04:12.695 04:00:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:12.695 04:00:14 -- scripts/common.sh@393 -- # pt= 00:04:12.695 04:00:14 -- scripts/common.sh@394 -- # return 1 00:04:12.695 04:00:14 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:12.695 1+0 records in 00:04:12.695 1+0 records out 00:04:12.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00833197 s, 126 MB/s 00:04:12.695 04:00:14 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:12.695 04:00:14 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:12.695 04:00:14 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:04:12.695 04:00:14 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:12.696 04:00:14 -- scripts/common.sh@389 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:12.696 No valid GPT data, bailing 00:04:12.696 04:00:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:12.696 04:00:14 -- scripts/common.sh@393 -- # pt= 00:04:12.696 04:00:14 -- scripts/common.sh@394 -- # return 1 00:04:12.696 04:00:14 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:12.696 1+0 records in 00:04:12.696 1+0 records out 00:04:12.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464035 s, 226 MB/s 00:04:12.696 04:00:14 -- spdk/autotest.sh@116 -- # sync 00:04:13.260 04:00:14 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:13.260 04:00:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:13.260 04:00:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.634 04:00:16 -- spdk/autotest.sh@122 -- # uname -s 00:04:14.634 04:00:16 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:14.634 04:00:16 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:14.634 04:00:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.634 04:00:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.634 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:04:14.634 ************************************ 00:04:14.634 START TEST setup.sh 00:04:14.634 ************************************ 00:04:14.634 04:00:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:14.634 * Looking for test storage... 00:04:14.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.634 04:00:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:14.634 04:00:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:14.634 04:00:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:14.634 04:00:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:14.634 04:00:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:14.634 04:00:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:14.634 04:00:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:14.634 04:00:16 -- scripts/common.sh@335 -- # IFS=.-: 00:04:14.634 04:00:16 -- scripts/common.sh@335 -- # read -ra ver1 00:04:14.634 04:00:16 -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.634 04:00:16 -- scripts/common.sh@336 -- # read -ra ver2 00:04:14.634 04:00:16 -- scripts/common.sh@337 -- # local 'op=<' 00:04:14.634 04:00:16 -- scripts/common.sh@339 -- # ver1_l=2 00:04:14.634 04:00:16 -- scripts/common.sh@340 -- # ver2_l=1 00:04:14.634 04:00:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:14.634 04:00:16 -- scripts/common.sh@343 -- # case "$op" in 00:04:14.634 04:00:16 -- scripts/common.sh@344 -- # : 1 00:04:14.634 04:00:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:14.634 04:00:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.634 04:00:16 -- scripts/common.sh@364 -- # decimal 1 00:04:14.634 04:00:16 -- scripts/common.sh@352 -- # local d=1 00:04:14.634 04:00:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.634 04:00:16 -- scripts/common.sh@354 -- # echo 1 00:04:14.634 04:00:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:14.634 04:00:16 -- scripts/common.sh@365 -- # decimal 2 00:04:14.634 04:00:16 -- scripts/common.sh@352 -- # local d=2 00:04:14.634 04:00:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.634 04:00:16 -- scripts/common.sh@354 -- # echo 2 00:04:14.634 04:00:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:14.634 04:00:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:14.634 04:00:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:14.634 04:00:16 -- scripts/common.sh@367 -- # return 0 00:04:14.634 04:00:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.634 04:00:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:14.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.634 --rc genhtml_branch_coverage=1 00:04:14.634 --rc genhtml_function_coverage=1 00:04:14.634 --rc genhtml_legend=1 00:04:14.634 --rc geninfo_all_blocks=1 00:04:14.634 --rc geninfo_unexecuted_blocks=1 00:04:14.634 00:04:14.634 ' 00:04:14.634 04:00:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:14.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.634 --rc genhtml_branch_coverage=1 00:04:14.634 --rc genhtml_function_coverage=1 00:04:14.634 --rc genhtml_legend=1 00:04:14.634 --rc geninfo_all_blocks=1 00:04:14.634 --rc geninfo_unexecuted_blocks=1 00:04:14.634 00:04:14.634 ' 00:04:14.634 04:00:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:14.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.634 --rc genhtml_branch_coverage=1 00:04:14.634 --rc genhtml_function_coverage=1 00:04:14.634 --rc genhtml_legend=1 00:04:14.634 --rc geninfo_all_blocks=1 00:04:14.634 --rc geninfo_unexecuted_blocks=1 00:04:14.634 00:04:14.634 ' 00:04:14.634 04:00:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:14.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.634 --rc genhtml_branch_coverage=1 00:04:14.634 --rc genhtml_function_coverage=1 00:04:14.634 --rc genhtml_legend=1 00:04:14.634 --rc geninfo_all_blocks=1 00:04:14.634 --rc geninfo_unexecuted_blocks=1 00:04:14.634 00:04:14.634 ' 00:04:14.634 04:00:16 -- setup/test-setup.sh@10 -- # uname -s 00:04:14.634 04:00:16 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:14.634 04:00:16 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:14.634 04:00:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.634 04:00:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.634 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:04:14.634 ************************************ 00:04:14.634 START TEST acl 00:04:14.634 ************************************ 00:04:14.634 04:00:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:14.892 * Looking for test storage... 
00:04:14.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.892 04:00:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:14.892 04:00:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:14.892 04:00:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:14.892 04:00:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:14.892 04:00:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:14.892 04:00:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:14.892 04:00:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:14.892 04:00:16 -- scripts/common.sh@335 -- # IFS=.-: 00:04:14.892 04:00:16 -- scripts/common.sh@335 -- # read -ra ver1 00:04:14.892 04:00:16 -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.892 04:00:16 -- scripts/common.sh@336 -- # read -ra ver2 00:04:14.892 04:00:16 -- scripts/common.sh@337 -- # local 'op=<' 00:04:14.892 04:00:16 -- scripts/common.sh@339 -- # ver1_l=2 00:04:14.892 04:00:16 -- scripts/common.sh@340 -- # ver2_l=1 00:04:14.892 04:00:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:14.892 04:00:16 -- scripts/common.sh@343 -- # case "$op" in 00:04:14.892 04:00:16 -- scripts/common.sh@344 -- # : 1 00:04:14.892 04:00:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:14.892 04:00:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.892 04:00:16 -- scripts/common.sh@364 -- # decimal 1 00:04:14.892 04:00:16 -- scripts/common.sh@352 -- # local d=1 00:04:14.892 04:00:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.892 04:00:16 -- scripts/common.sh@354 -- # echo 1 00:04:14.892 04:00:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:14.892 04:00:16 -- scripts/common.sh@365 -- # decimal 2 00:04:14.892 04:00:16 -- scripts/common.sh@352 -- # local d=2 00:04:14.892 04:00:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.892 04:00:16 -- scripts/common.sh@354 -- # echo 2 00:04:14.892 04:00:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:14.892 04:00:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:14.892 04:00:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:14.892 04:00:16 -- scripts/common.sh@367 -- # return 0 00:04:14.892 04:00:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.892 04:00:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:14.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.892 --rc genhtml_branch_coverage=1 00:04:14.892 --rc genhtml_function_coverage=1 00:04:14.892 --rc genhtml_legend=1 00:04:14.892 --rc geninfo_all_blocks=1 00:04:14.892 --rc geninfo_unexecuted_blocks=1 00:04:14.892 00:04:14.892 ' 00:04:14.892 04:00:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:14.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.892 --rc genhtml_branch_coverage=1 00:04:14.892 --rc genhtml_function_coverage=1 00:04:14.892 --rc genhtml_legend=1 00:04:14.892 --rc geninfo_all_blocks=1 00:04:14.892 --rc geninfo_unexecuted_blocks=1 00:04:14.892 00:04:14.892 ' 00:04:14.892 04:00:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:14.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.893 --rc genhtml_branch_coverage=1 00:04:14.893 --rc genhtml_function_coverage=1 00:04:14.893 --rc genhtml_legend=1 00:04:14.893 --rc geninfo_all_blocks=1 00:04:14.893 --rc geninfo_unexecuted_blocks=1 00:04:14.893 00:04:14.893 ' 00:04:14.893 04:00:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:14.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.893 --rc genhtml_branch_coverage=1 00:04:14.893 --rc genhtml_function_coverage=1 00:04:14.893 --rc genhtml_legend=1 00:04:14.893 --rc geninfo_all_blocks=1 00:04:14.893 --rc geninfo_unexecuted_blocks=1 00:04:14.893 00:04:14.893 ' 00:04:14.893 04:00:16 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:14.893 04:00:16 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:14.893 04:00:16 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:14.893 04:00:16 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:14.893 04:00:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.893 04:00:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.893 04:00:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n2 00:04:14.893 04:00:16 -- common/autotest_common.sh@1657 -- # local device=nvme0n2 00:04:14.893 04:00:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.893 04:00:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n3 00:04:14.893 04:00:16 -- common/autotest_common.sh@1657 -- # local device=nvme0n3 00:04:14.893 04:00:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.893 04:00:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.893 04:00:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.893 04:00:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:14.893 04:00:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:14.893 04:00:16 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:14.893 
04:00:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:14.893 04:00:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:14.893 04:00:16 -- setup/acl.sh@12 -- # devs=() 00:04:14.893 04:00:16 -- setup/acl.sh@12 -- # declare -a devs 00:04:14.893 04:00:16 -- setup/acl.sh@13 -- # drivers=() 00:04:14.893 04:00:16 -- setup/acl.sh@13 -- # declare -A drivers 00:04:14.893 04:00:16 -- setup/acl.sh@51 -- # setup reset 00:04:14.893 04:00:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.893 04:00:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.828 04:00:17 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:15.828 04:00:17 -- setup/acl.sh@16 -- # local dev driver 00:04:15.828 04:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.828 04:00:17 -- setup/acl.sh@15 -- # setup output status 00:04:15.828 04:00:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.828 04:00:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:15.828 Hugepages 00:04:15.828 node hugesize free / total 00:04:15.828 04:00:17 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:15.828 04:00:17 -- setup/acl.sh@19 -- # continue 00:04:15.828 04:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.828 00:04:15.828 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.828 04:00:17 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:15.828 04:00:17 -- setup/acl.sh@19 -- # continue 00:04:15.828 04:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.086 04:00:17 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:16.086 04:00:17 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:16.086 04:00:17 -- setup/acl.sh@20 -- # continue 00:04:16.087 04:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.087 04:00:17 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:16.087 04:00:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:16.087 04:00:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:16.087 04:00:17 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:16.087 04:00:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:16.087 04:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.087 04:00:17 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:16.087 04:00:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:16.087 04:00:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:16.087 04:00:17 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:16.087 04:00:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:16.087 04:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.087 04:00:17 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:16.087 04:00:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:16.087 04:00:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:16.087 04:00:17 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:16.087 04:00:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:16.087 04:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.087 04:00:17 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:16.087 04:00:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:16.087 04:00:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:16.087 04:00:17 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:16.087 04:00:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
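Above, test/setup/acl.sh builds its device list by parsing `setup.sh status`: every line carrying a PCI BDF (matching *:*:*.* ) whose driver column reads nvme is appended to devs, with the driver remembered per BDF. A minimal sketch of that filter, reconstructed from the traced `read -r _ dev _ _ _ driver _` loop and not the verbatim acl.sh source (field positions are assumed from the trace):

    declare -a devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue   # skip hugepage and header rows
        [[ $driver == nvme ]] || continue   # keep only NVMe-bound functions
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)

With the four NVMe controllers collected here, the `(( 4 > 0 ))` check that follows passes and the denied/allowed sub-tests run against those BDFs.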
00:04:16.087 04:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.087 04:00:17 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:16.087 04:00:17 -- setup/acl.sh@54 -- # run_test denied denied 00:04:16.087 04:00:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.087 04:00:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.087 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.087 ************************************ 00:04:16.087 START TEST denied 00:04:16.087 ************************************ 00:04:16.087 04:00:17 -- common/autotest_common.sh@1114 -- # denied 00:04:16.087 04:00:17 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:16.087 04:00:17 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:16.087 04:00:17 -- setup/acl.sh@38 -- # setup output config 00:04:16.087 04:00:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.087 04:00:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.026 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:17.026 04:00:18 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:17.026 04:00:18 -- setup/acl.sh@28 -- # local dev driver 00:04:17.026 04:00:18 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:17.026 04:00:18 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:17.026 04:00:18 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:17.026 04:00:18 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:17.026 04:00:18 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:17.026 04:00:18 -- setup/acl.sh@41 -- # setup reset 00:04:17.026 04:00:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.026 04:00:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.638 00:04:23.638 real 0m6.680s 00:04:23.638 user 0m0.632s 00:04:23.638 sys 0m1.071s 00:04:23.638 04:00:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.638 04:00:24 -- common/autotest_common.sh@10 -- # set +x 00:04:23.638 ************************************ 00:04:23.638 END TEST denied 00:04:23.638 ************************************ 00:04:23.638 04:00:24 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:23.638 04:00:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.638 04:00:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.638 04:00:24 -- common/autotest_common.sh@10 -- # set +x 00:04:23.638 ************************************ 00:04:23.638 START TEST allowed 00:04:23.638 ************************************ 00:04:23.638 04:00:24 -- common/autotest_common.sh@1114 -- # allowed 00:04:23.638 04:00:24 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:23.638 04:00:24 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:23.638 04:00:24 -- setup/acl.sh@45 -- # setup output config 00:04:23.638 04:00:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.638 04:00:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.896 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.897 04:00:25 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:23.897 04:00:25 -- setup/acl.sh@28 -- # local dev driver 00:04:23.897 04:00:25 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:23.897 04:00:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:23.897 04:00:25 -- setup/acl.sh@32 -- # readlink -f 
00:04:23.638 04:00:24 -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:23.638 04:00:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:23.638 04:00:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:23.638 04:00:24 -- common/autotest_common.sh@10 -- # set +x
00:04:23.638 ************************************
00:04:23.638 START TEST allowed
00:04:23.638 ************************************
00:04:23.638 04:00:24 -- common/autotest_common.sh@1114 -- # allowed
00:04:23.638 04:00:24 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:04:23.638 04:00:24 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:04:23.638 04:00:24 -- setup/acl.sh@45 -- # setup output config
00:04:23.638 04:00:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:23.638 04:00:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:23.896 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:23.897 04:00:25 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:04:23.897 04:00:25 -- setup/acl.sh@28 -- # local dev driver
00:04:23.897 04:00:25 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:23.897 04:00:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]]
00:04:23.897 04:00:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver
00:04:23.897 04:00:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:23.897 04:00:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:23.897 04:00:25 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:23.897 04:00:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]]
00:04:23.897 04:00:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver
00:04:23.897 04:00:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:23.897 04:00:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:23.897 04:00:25 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:23.897 04:00:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]]
00:04:23.897 04:00:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver
00:04:23.897 04:00:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:23.897 04:00:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:23.897 04:00:25 -- setup/acl.sh@48 -- # setup reset
00:04:23.897 04:00:25 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:23.897 04:00:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:24.831
00:04:24.831 real 0m1.761s
00:04:24.831 user 0m0.771s
00:04:24.831 sys 0m0.956s
00:04:24.831 04:00:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:24.831 ************************************
00:04:24.831 END TEST allowed
00:04:24.831 04:00:26 -- common/autotest_common.sh@10 -- # set +x
00:04:24.831 ************************************
00:04:24.831
00:04:24.831 real 0m9.966s
00:04:24.831 user 0m2.063s
00:04:24.831 sys 0m2.902s
00:04:24.831 04:00:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:24.831 04:00:26 -- common/autotest_common.sh@10 -- # set +x
00:04:24.831 ************************************
00:04:24.831 END TEST acl
00:04:24.831 ************************************
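The allowed test is the mirror image of the denied one: with PCI_ALLOWED naming a single controller, setup.sh should rebind only that device, and verify() then walks the remaining controllers' sysfs driver symlinks to check they are untouched. A sketch of that flow under the same assumptions as above:

#!/usr/bin/env bash
# Sketch of the `allowed` flow: allow exactly one controller, expect setup.sh
# to rebind only that one, and check the others are still on nvme.
SETUP=/home/vagrant/spdk_repo/spdk/scripts/setup.sh

PCI_ALLOWED=0000:00:06.0 "$SETUP" config \
    | grep -E '0000:00:06.0 .*: nvme -> .*'   # only the allowed BDF rebinds

for dev in 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
    [[ -e /sys/bus/pci/devices/$dev ]]
    driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
    [[ ${driver##*/} == nvme ]]               # non-allowed devices keep nvme
done

"$SETUP" reset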
00:04:24.831 04:00:26 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:24.831 04:00:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:24.831 04:00:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:24.831 04:00:26 -- common/autotest_common.sh@10 -- # set +x
00:04:24.831 ************************************
00:04:24.831 START TEST hugepages
00:04:24.831 ************************************
00:04:24.831 04:00:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:24.831 * Looking for test storage...
00:04:24.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:24.831 04:00:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:24.831 04:00:26 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:24.831 04:00:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:24.831 04:00:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:24.831 04:00:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:24.831 04:00:26 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:24.831 04:00:26 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:24.831 04:00:26 -- scripts/common.sh@335 -- # IFS=.-:
00:04:24.831 04:00:26 -- scripts/common.sh@335 -- # read -ra ver1
00:04:24.831 04:00:26 -- scripts/common.sh@336 -- # IFS=.-:
00:04:24.831 04:00:26 -- scripts/common.sh@336 -- # read -ra ver2
00:04:24.831 04:00:26 -- scripts/common.sh@337 -- # local 'op=<'
00:04:24.831 04:00:26 -- scripts/common.sh@339 -- # ver1_l=2
00:04:24.831 04:00:26 -- scripts/common.sh@340 -- # ver2_l=1
00:04:24.831 04:00:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:24.831 04:00:26 -- scripts/common.sh@343 -- # case "$op" in
00:04:24.831 04:00:26 -- scripts/common.sh@344 -- # : 1
00:04:24.831 04:00:26 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:24.831 04:00:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:24.832 04:00:26 -- scripts/common.sh@364 -- # decimal 1
00:04:24.832 04:00:26 -- scripts/common.sh@352 -- # local d=1
00:04:24.832 04:00:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:24.832 04:00:26 -- scripts/common.sh@354 -- # echo 1
00:04:24.832 04:00:26 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:24.832 04:00:26 -- scripts/common.sh@365 -- # decimal 2
00:04:24.832 04:00:26 -- scripts/common.sh@352 -- # local d=2
00:04:24.832 04:00:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:24.832 04:00:26 -- scripts/common.sh@354 -- # echo 2
00:04:24.832 04:00:26 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:24.832 04:00:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:24.832 04:00:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:24.832 04:00:26 -- scripts/common.sh@367 -- # return 0
00:04:24.832 04:00:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:24.832 04:00:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:24.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.832 --rc genhtml_branch_coverage=1
00:04:24.832 --rc genhtml_function_coverage=1
00:04:24.832 --rc genhtml_legend=1
00:04:24.832 --rc geninfo_all_blocks=1
00:04:24.832 --rc geninfo_unexecuted_blocks=1
00:04:24.832
00:04:24.832 '
00:04:24.832 04:00:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:24.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.832 --rc genhtml_branch_coverage=1
00:04:24.832 --rc genhtml_function_coverage=1
00:04:24.832 --rc genhtml_legend=1
00:04:24.832 --rc geninfo_all_blocks=1
00:04:24.832 --rc geninfo_unexecuted_blocks=1
00:04:24.832
00:04:24.832 '
00:04:24.832 04:00:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:24.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.832 --rc genhtml_branch_coverage=1
00:04:24.832 --rc genhtml_function_coverage=1
00:04:24.832 --rc genhtml_legend=1
00:04:24.832 --rc geninfo_all_blocks=1
00:04:24.832 --rc geninfo_unexecuted_blocks=1
00:04:24.832
00:04:24.832 '
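The cmp_versions walk just traced splits both version strings on `.`, `-`, and `:`, then compares them field by field as decimals; "1.15 < 2" succeeds on the very first field. A self-contained sketch of that comparison, simplified to the '<' case exercised here (the real scripts/common.sh also routes fields through its `decimal` helper):

#!/usr/bin/env bash
# Sketch of the cmp_versions field-by-field comparison traced above.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # first field decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # all fields equal -> not less-than
}

lt 1.15 2 && echo "lcov is older than 2: use the pre-2.0 --rc option set"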
00:04:24.832 04:00:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:24.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.832 --rc genhtml_branch_coverage=1
00:04:24.832 --rc genhtml_function_coverage=1
00:04:24.832 --rc genhtml_legend=1
00:04:24.832 --rc geninfo_all_blocks=1
00:04:24.832 --rc geninfo_unexecuted_blocks=1
00:04:24.832
00:04:24.832 '
00:04:24.832 04:00:26 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:24.832 04:00:26 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:24.832 04:00:26 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:24.832 04:00:26 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:24.832 04:00:26 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:24.832 04:00:26 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:24.832 04:00:26 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:24.832 04:00:26 -- setup/common.sh@18 -- # local node=
00:04:24.832 04:00:26 -- setup/common.sh@19 -- # local var val
00:04:24.832 04:00:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.832 04:00:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.832 04:00:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.832 04:00:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.832 04:00:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.832 04:00:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.832 04:00:26 -- setup/common.sh@31 -- # IFS=': '
00:04:24.832 04:00:26 -- setup/common.sh@31 -- # read -r var val _
00:04:24.832 04:00:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 4354040 kB' 'MemAvailable: 7322756 kB' 'Buffers: 3704 kB' 'Cached: 3170384 kB' 'SwapCached: 0 kB' 'Active: 465032 kB' 'Inactive: 2824348 kB' 'Active(anon): 125828 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824348 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 116988 kB' 'Mapped: 50964 kB' 'Shmem: 10536 kB' 'KReclaimable: 84628 kB' 'Slab: 190888 kB' 'SReclaimable: 84628 kB' 'SUnreclaim: 106260 kB' 'KernelStack: 6768 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12409996 kB' 'Committed_AS: 311096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55800 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: setup/common.sh@32 tests every /proc/meminfo key against Hugepagesize in order — MemTotal, MemFree, MemAvailable, ..., HugePages_Rsvd, HugePages_Surp — and each mismatch takes @32 continue followed by @31 IFS=': ' / read -r var val _]
00:04:24.834 04:00:26 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:24.834 04:00:26 -- setup/common.sh@33 -- # echo 2048
00:04:24.834 04:00:26 -- setup/common.sh@33 -- # return 0
00:04:24.834 04:00:26 -- setup/hugepages.sh@16 -- # default_hugepages=2048
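The get_meminfo call just completed is the workhorse of this suite: slurp /proc/meminfo (or a per-node meminfo under /sys/devices/system/node when a node argument is given), then scan line by line for the requested key. A minimal sketch of that pattern as the trace shows it, not the verbatim setup/common.sh:

#!/usr/bin/env bash
# Sketch of get_meminfo as traced above.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do       # "MemTotal: 12237088 kB" -> var, val, unit
        if [[ $var == "$get" ]]; then
            echo "$val"                        # kB for most keys, a page count for HugePages_*
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo Hugepagesize    # -> 2048 on this host, per the trace above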
00:04:24.834 04:00:26 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:24.834 04:00:26 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:24.834 04:00:26 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:24.834 04:00:26 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:24.834 04:00:26 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:24.834 04:00:26 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:24.834 04:00:26 -- setup/hugepages.sh@207 -- # get_nodes
00:04:24.834 04:00:26 -- setup/hugepages.sh@27 -- # local node
00:04:24.834 04:00:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.834 04:00:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:24.834 04:00:26 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:24.834 04:00:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:24.834 04:00:26 -- setup/hugepages.sh@208 -- # clear_hp
00:04:24.834 04:00:26 -- setup/hugepages.sh@37 -- # local node hp
00:04:24.834 04:00:26 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:24.834 04:00:26 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:24.834 04:00:26 -- setup/hugepages.sh@41 -- # echo 0
00:04:24.834 04:00:26 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:24.834 04:00:26 -- setup/hugepages.sh@41 -- # echo 0
00:04:24.834 04:00:26 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:24.834 04:00:26 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:24.834 04:00:26 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:24.834 04:00:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:24.834 04:00:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:24.834 04:00:26 -- common/autotest_common.sh@10 -- # set +x
00:04:24.834 ************************************
00:04:24.834 START TEST default_setup
00:04:24.834 ************************************
00:04:24.834 04:00:26 -- common/autotest_common.sh@1114 -- # default_setup
00:04:24.834 04:00:26 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:24.834 04:00:26 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:24.834 04:00:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:24.834 04:00:26 -- setup/hugepages.sh@51 -- # shift
00:04:24.834 04:00:26 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:24.834 04:00:26 -- setup/hugepages.sh@52 -- # local node_ids
00:04:24.834 04:00:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:24.834 04:00:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:24.834 04:00:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:24.834 04:00:26 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:24.834 04:00:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.834 04:00:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:24.834 04:00:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:24.834 04:00:26 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.834 04:00:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.834 04:00:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:24.834 04:00:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:24.834 04:00:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:24.834 04:00:26 -- setup/hugepages.sh@73 -- # return 0
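The sizing arithmetic just traced is simple: get_test_nr_hugepages 2097152 0 turns a request that appears to be 2097152 kB into 1024 pages of the 2048 kB default size, after clear_hp has zeroed every per-node pool. A sketch of both steps (sysfs writes need root; paths per the trace):

#!/usr/bin/env bash
# Sketch of clear_hp plus the page-count computation traced above.
default_hugepages=2048                             # kB, from get_meminfo Hugepagesize
size_kb=2097152                                    # requested total (assumed kB)
nr_hugepages=$(( size_kb / default_hugepages ))    # = 1024, matching nr_hugepages above

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"                # clear_hp: drop any leftover pages
    done
done
echo "$nr_hugepages"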
00:04:24.834 04:00:26 -- setup/hugepages.sh@137 -- # setup output
00:04:24.834 04:00:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.834 04:00:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:25.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:25.769 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:25.769 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:25.769 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:26.032 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:26.032 04:00:27 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:26.032 04:00:27 -- setup/hugepages.sh@89 -- # local node
00:04:26.032 04:00:27 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.032 04:00:27 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.032 04:00:27 -- setup/hugepages.sh@92 -- # local surp
00:04:26.032 04:00:27 -- setup/hugepages.sh@93 -- # local resv
00:04:26.032 04:00:27 -- setup/hugepages.sh@94 -- # local anon
00:04:26.032 04:00:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.032 04:00:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.032 04:00:27 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.032 04:00:27 -- setup/common.sh@18 -- # local node=
00:04:26.032 04:00:27 -- setup/common.sh@19 -- # local var val
00:04:26.032 04:00:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.032 04:00:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.032 04:00:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.032 04:00:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.032 04:00:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.032 04:00:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.033 04:00:27 -- setup/common.sh@31 -- # IFS=': '
00:04:26.033 04:00:27 -- setup/common.sh@31 -- # read -r var val _
00:04:26.033 04:00:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6441472 kB' 'MemAvailable: 9409948 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 467144 kB' 'Inactive: 2824368 kB' 'Active(anon): 127940 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119020 kB' 'Mapped: 50996 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190500 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106392 kB' 'KernelStack: 6720 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55800 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the @32/@31 key scan repeats for AnonHugePages, continuing past MemTotal through HardwareCorrupted]
00:04:26.034 04:00:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.034 04:00:27 -- setup/common.sh@33 -- # echo 0
00:04:26.034 04:00:27 -- setup/common.sh@33 -- # return 0
00:04:26.034 04:00:27 -- setup/hugepages.sh@97 -- # anon=0
00:04:26.034 04:00:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.034 04:00:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.034 04:00:27 -- setup/common.sh@18 -- # local node=
00:04:26.034 04:00:27 -- setup/common.sh@19 -- # local var val
00:04:26.034 04:00:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.034 04:00:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.034 04:00:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.034 04:00:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.034 04:00:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.034 04:00:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.034 04:00:27 -- setup/common.sh@31 -- # IFS=': '
00:04:26.034 04:00:27 -- setup/common.sh@31 -- # read -r var val _
00:04:26.034 04:00:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6441472 kB' 'MemAvailable: 9409948 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 467064 kB' 'Inactive: 2824368 kB' 'Active(anon): 127860 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118940 kB' 'Mapped: 50944 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190500 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106392 kB' 'KernelStack: 6720 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55784 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the @32/@31 key scan repeats for HugePages_Surp, continuing past MemTotal through HugePages_Rsvd]
00:04:26.035 04:00:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.035 04:00:27 -- setup/common.sh@33 -- # echo 0
00:04:26.035 04:00:27 -- setup/common.sh@33 -- # return 0
00:04:26.035 04:00:27 -- setup/hugepages.sh@99 -- # surp=0
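Before sampling AnonHugePages and HugePages_Surp above, verify_nr_hugepages gated on `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`, i.e. it only counts anonymous hugepages when transparent hugepages are not fully disabled. A sketch of that gate, assuming the string comes from the usual sysfs knob (the trace shows only the already-expanded value):

#!/usr/bin/env bash
# Sketch of the THP gate in verify_nr_hugepages: the kernel reports e.g.
# "always [madvise] never", with brackets marking the active mode.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    echo "THP active ($thp); AnonHugePages=${anon} kB, surplus pages=${surp}"
fi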
00:04:26.035 04:00:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.035 04:00:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.035 04:00:27 -- setup/common.sh@31 -- # IFS=': '
00:04:26.035 04:00:27 -- setup/common.sh@31 -- # read -r var val _
00:04:26.035 04:00:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6441812 kB' 'MemAvailable: 9410288 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 467024 kB' 'Inactive: 2824368 kB' 'Active(anon): 127820 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118900 kB' 'Mapped: 50944 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190500 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106392 kB' 'KernelStack: 6704 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55800 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:26.035 04:00:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.035 04:00:27 -- setup/common.sh@32 -- # continue
... (the field-by-field scan walks the snapshot above, one [[ field == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pair per field, until the requested key is reached) ...
00:04:26.036 04:00:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.036 04:00:27 -- setup/common.sh@33 -- # echo 0
00:04:26.036 04:00:27 -- setup/common.sh@33 -- # return 0
00:04:26.036 04:00:27 -- setup/hugepages.sh@100 -- # resv=0
00:04:26.036 nr_hugepages=1024
00:04:26.036 04:00:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:26.036 resv_hugepages=0
00:04:26.036 04:00:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:26.036 surplus_hugepages=0
00:04:26.036 04:00:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:26.036 anon_hugepages=0
00:04:26.036 04:00:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:26.036 04:00:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.036 04:00:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
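The records above are bash xtrace output of setup/common.sh's get_meminfo helper: it snapshots the meminfo file into an array, then splits each line with IFS=': ' read -r var val _ and walks field by field until the requested key matches (xtrace prints the literal, non-glob match pattern with every character backslash-escaped, which is why the key renders as \H\u\g\e\P\a\g\e\s\_\R\s\v\d). A minimal standalone sketch of the same lookup idea, under a hypothetical helper name rather than the repo's function verbatim:

  #!/usr/bin/env bash
  # get_key KEY -- print the value of KEY from /proc/meminfo, as the
  # traced loop does: split on ': ' and stop at the first matching field.
  get_key() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then   # quoted, so a literal match like the escaped pattern above
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  get_key HugePages_Surp   # prints 0 on the run traced here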
00:04:26.036 04:00:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:26.036 04:00:27 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:26.036 04:00:27 -- setup/common.sh@18 -- # local node=
00:04:26.036 04:00:27 -- setup/common.sh@19 -- # local var val
00:04:26.036 04:00:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.036 04:00:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.036 04:00:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.037 04:00:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.037 04:00:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.037 04:00:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.037 04:00:27 -- setup/common.sh@31 -- # IFS=': '
00:04:26.037 04:00:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6442332 kB' 'MemAvailable: 9410808 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466864 kB' 'Inactive: 2824368 kB' 'Active(anon): 127660 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118764 kB' 'Mapped: 50824 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190512 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106404 kB' 'KernelStack: 6688 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55784 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:26.037 04:00:27 -- setup/common.sh@31 -- # read -r var val _
00:04:26.037 04:00:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.037 04:00:27 -- setup/common.sh@32 -- # continue
... (the scan repeats for each remaining field of the snapshot until HugePages_Total) ...
00:04:26.038 04:00:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.038 04:00:27 -- setup/common.sh@33 -- # echo 1024
00:04:26.038 04:00:27 -- setup/common.sh@33 -- # return 0
00:04:26.038 04:00:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.038 04:00:27 -- setup/hugepages.sh@112 -- # get_nodes
00:04:26.038 04:00:27 -- setup/hugepages.sh@27 -- # local node
00:04:26.038 04:00:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.038 04:00:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:26.038 04:00:27 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:26.038 04:00:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:26.038 04:00:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.038 04:00:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
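At this point the verifier has surp, resv and HugePages_Total from the same snapshot and checks them against the requested page count (the (( 1024 == nr_hugepages + surp + resv )) record at hugepages.sh@110). A sketch of that consistency check, with the semantics as read off the trace (every page in the pool must be accounted for by request, surplus or reservation):

  nr_hugepages=1024                                                # requested pages
  surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo) # 0 in this run
  resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo) # 0 in this run
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo) # 1024 in this run
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent"
  else
      echo "hugepage accounting mismatch" >&2
  fi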
00:04:26.038 04:00:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:26.038 04:00:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.038 04:00:27 -- setup/common.sh@18 -- # local node=0
00:04:26.038 04:00:27 -- setup/common.sh@19 -- # local var val
00:04:26.038 04:00:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.038 04:00:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.038 04:00:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:26.038 04:00:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:26.038 04:00:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.038 04:00:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.038 04:00:27 -- setup/common.sh@31 -- # IFS=': '
00:04:26.038 04:00:27 -- setup/common.sh@31 -- # read -r var val _
00:04:26.038 04:00:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6442392 kB' 'MemUsed: 5794696 kB' 'SwapCached: 0 kB' 'Active: 467052 kB' 'Inactive: 2824376 kB' 'Active(anon): 127848 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 3174076 kB' 'Mapped: 51084 kB' 'AnonPages: 118704 kB' 'Shmem: 10496 kB' 'KernelStack: 6756 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84108 kB' 'Slab: 190512 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:26.038 04:00:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.038 04:00:27 -- setup/common.sh@32 -- # continue
... (the scan repeats over the node0 snapshot above until HugePages_Surp) ...
00:04:26.039 04:00:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.039 04:00:27 -- setup/common.sh@33 -- # echo 0
00:04:26.039 04:00:27 -- setup/common.sh@33 -- # return 0
00:04:26.039 04:00:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.039 04:00:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.039 04:00:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.039 04:00:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.039 node0=1024 expecting 1024
00:04:26.039 04:00:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:26.039 04:00:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:26.039
00:04:26.039 real 0m1.113s
00:04:26.039 user 0m0.469s
00:04:26.039 sys 0m0.601s
00:04:26.039 04:00:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:26.039 ************************************
00:04:26.039 END TEST default_setup
00:04:26.039 ************************************
00:04:26.039 04:00:27 -- common/autotest_common.sh@10 -- # set +x
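The per-node lookup just traced differs from the earlier ones only in that node=0 switches mem_f to /sys/devices/system/node/node0/meminfo, where every line carries a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") step strips that prefix with an extglob pattern before the usual scan. A small sketch of the same per-node read, assuming a NUMA node 0 exists on the machine:

  #!/usr/bin/env bash
  shopt -s extglob                        # required for the +([0-9]) pattern below
  node=0
  mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
  # "Node 0 HugePages_Free:   1024" -> "HugePages_Free:   1024"
  mem=("${mem[@]#Node +([0-9]) }")
  for line in "${mem[@]}"; do
      [[ $line == HugePages_@(Total|Free|Surp):* ]] && echo "node${node} ${line}"
  done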
00:04:26.039 04:00:27 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:26.039 04:00:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:26.039 04:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:26.039 04:00:27 -- common/autotest_common.sh@10 -- # set +x
00:04:26.039 ************************************
00:04:26.039 START TEST per_node_1G_alloc
00:04:26.039 ************************************
00:04:26.039 04:00:27 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:26.039 04:00:27 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:26.039 04:00:27 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:26.039 04:00:27 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:26.039 04:00:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:26.039 04:00:27 -- setup/hugepages.sh@51 -- # shift
00:04:26.039 04:00:27 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:26.039 04:00:27 -- setup/hugepages.sh@52 -- # local node_ids
00:04:26.039 04:00:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.039 04:00:27 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:26.039 04:00:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:26.039 04:00:27 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:26.039 04:00:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.039 04:00:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:26.039 04:00:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:26.039 04:00:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.039 04:00:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.039 04:00:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:26.039 04:00:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:26.039 04:00:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:26.039 04:00:27 -- setup/hugepages.sh@73 -- # return 0
00:04:26.039 04:00:27 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:26.039 04:00:27 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:26.039 04:00:27 -- setup/hugepages.sh@146 -- # setup output
00:04:26.039 04:00:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.039 04:00:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:26.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:26.612 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.612 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.612 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.612 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
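per_node_1G_alloc asks get_test_nr_hugepages for 1048576 kB of hugepages pinned to node 0; with the 2048 kB Hugepagesize visible in the snapshots, that is 1048576 / 2048 = 512 pages, which is why the trace settles on NRHUGE=512 HUGENODE=0 before re-running setup.sh. The arithmetic as a sketch, with variable names borrowed from the trace:

  size_kb=1048576                                                 # 1 GiB requested, in kB
  hp_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this VM
  NRHUGE=$(( size_kb / hp_kb ))                                   # -> 512 pages
  HUGENODE=0                                                      # pin the pool to NUMA node 0
  echo "NRHUGE=$NRHUGE HUGENODE=$HUGENODE"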
00:04:26.612 04:00:28 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:26.612 04:00:28 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:26.612 04:00:28 -- setup/hugepages.sh@89 -- # local node
00:04:26.612 04:00:28 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.612 04:00:28 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.612 04:00:28 -- setup/hugepages.sh@92 -- # local surp
00:04:26.612 04:00:28 -- setup/hugepages.sh@93 -- # local resv
00:04:26.612 04:00:28 -- setup/hugepages.sh@94 -- # local anon
00:04:26.612 04:00:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.612 04:00:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.612 04:00:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.613 04:00:28 -- setup/common.sh@18 -- # local node=
00:04:26.613 04:00:28 -- setup/common.sh@19 -- # local var val
00:04:26.613 04:00:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.613 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.613 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.613 04:00:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.613 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.613 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.613 04:00:28 -- setup/common.sh@31 -- # IFS=': '
00:04:26.613 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7493300 kB' 'MemAvailable: 10461784 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 467640 kB' 'Inactive: 2824376 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119288 kB' 'Mapped: 50952 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190592 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106484 kB' 'KernelStack: 6940 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55880 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:26.613 04:00:28 -- setup/common.sh@31 -- # read -r var val _
00:04:26.613 04:00:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.613 04:00:28 -- setup/common.sh@32 -- # continue
... (the scan repeats for each remaining field until AnonHugePages) ...
00:04:26.614 04:00:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.614 04:00:28 -- setup/common.sh@33 -- # echo 0
00:04:26.614 04:00:28 -- setup/common.sh@33 -- # return 0
00:04:26.614 04:00:28 -- setup/hugepages.sh@97 -- # anon=0
00:04:26.614 04:00:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.614 04:00:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.614 04:00:28 -- setup/common.sh@18 -- # local node=
00:04:26.614 04:00:28 -- setup/common.sh@19 -- # local var val
00:04:26.614 04:00:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.614 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.614 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.614 04:00:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.614 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.614 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.614 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7493300 kB' 'MemAvailable: 10461784 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466880 kB' 'Inactive: 2824376 kB' 'Active(anon): 127676 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118508 kB' 'Mapped: 50884 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190612 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106504 kB' 'KernelStack: 6732 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55816 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:26.614 04:00:28 -- setup/common.sh@31 -- # IFS=': '
00:04:26.614 04:00:28 -- setup/common.sh@31 -- # read -r var val _
00:04:26.614 04:00:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.614 04:00:28 -- setup/common.sh@32 -- # continue
... (the HugePages_Surp scan over this snapshot is still in progress at this point in the log) ...
-- # read -r var val _ 00:04:26.614 04:00:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.614 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 
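The loop traced above is setup/common.sh's get_meminfo scanning a meminfo file one 'key: value' line at a time: it splits each line on IFS=': ', compares the key against the requested field, and continues past every non-match until it can echo the value and return. A minimal standalone sketch of that pattern, assuming bash 4+ with extglob; the function name get_meminfo_value is illustrative, not the script's own:

  shopt -s extglob                    # needed for the +([0-9]) pattern below

  get_meminfo_value() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local mem line var val _
      # per-node counters live in sysfs when a node index is given
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"   # _ swallows a trailing "kB"
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done
      echo 0                             # absent field reads as 0
  }

Checked against this trace, get_meminfo_value HugePages_Surp would print 0, exactly as the scan above ends with 'echo 0'; get_meminfo_value HugePages_Free 0 would print 512 from the node0 file.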
00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.615 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.615 04:00:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 
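The meminfo snapshots printed in this trace are internally consistent on the hugepage side, which is a quick sanity check when reading a dump like the one above: 512 pages of 2048 kB each must account for exactly the Hugetlb figure. A one-line check using the numbers from the dump:

  echo $(( 512 * 2048 )) kB    # HugePages_Total x Hugepagesize -> 1048576 kB, the Hugetlb field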
00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.616 04:00:28 -- setup/common.sh@33 -- # echo 0 00:04:26.616 04:00:28 -- setup/common.sh@33 -- # return 0 00:04:26.616 04:00:28 -- setup/hugepages.sh@99 -- # surp=0 00:04:26.616 04:00:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.616 04:00:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.616 04:00:28 -- setup/common.sh@18 -- # local node= 00:04:26.616 04:00:28 -- setup/common.sh@19 -- # local var val 00:04:26.616 04:00:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.616 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.616 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.616 04:00:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.616 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.616 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.616 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7493300 kB' 'MemAvailable: 10461784 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466716 kB' 'Inactive: 2824376 kB' 'Active(anon): 127512 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118344 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190692 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106584 kB' 'KernelStack: 6672 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55816 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.616 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.616 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 
-- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.617 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.617 04:00:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.618 04:00:28 -- setup/common.sh@33 -- # echo 0 00:04:26.618 04:00:28 -- setup/common.sh@33 -- # return 0 00:04:26.618 04:00:28 -- setup/hugepages.sh@100 -- # resv=0 00:04:26.618 04:00:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:26.618 nr_hugepages=512 00:04:26.618 resv_hugepages=0 00:04:26.618 04:00:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.618 surplus_hugepages=0 00:04:26.618 04:00:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.618 anon_hugepages=0 00:04:26.618 04:00:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.618 04:00:28 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:26.618 04:00:28 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:26.618 04:00:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.618 04:00:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.618 04:00:28 -- setup/common.sh@18 -- # local node= 00:04:26.618 04:00:28 -- setup/common.sh@19 -- # local var val 00:04:26.618 04:00:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.618 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.618 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.618 04:00:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.618 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.618 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7499152 kB' 'MemAvailable: 10467636 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466536 kB' 'Inactive: 2824376 kB' 'Active(anon): 127332 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118412 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190700 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106592 kB' 'KernelStack: 6736 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.618 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.618 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.619 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.619 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 
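What hugepages.sh is establishing with these back-to-back scans is the accounting identity behind the (( 512 == nr_hugepages + surp + resv )) check at hugepages.sh@107: the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages. A sketch of that assertion, reusing the illustrative get_meminfo_value helper from earlier (names assumed, not the script's own):

  nr_hugepages=512
  surp=$(get_meminfo_value HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo_value HugePages_Total)   # 512 in this run
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2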
00:04:26.619 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # continue 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.620 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.620 04:00:28 -- setup/common.sh@33 -- # echo 512 00:04:26.620 04:00:28 -- setup/common.sh@33 -- # return 0 00:04:26.620 04:00:28 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:26.620 04:00:28 -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.620 04:00:28 -- setup/hugepages.sh@27 -- # local node 00:04:26.620 04:00:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.620 04:00:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.620 04:00:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:26.620 04:00:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.620 04:00:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.620 04:00:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 
resv ))
00:04:26.620 04:00:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:26.620 04:00:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.620 04:00:28 -- setup/common.sh@18 -- # local node=0
00:04:26.620 04:00:28 -- setup/common.sh@19 -- # local var val
00:04:26.620 04:00:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.620 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.620 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:26.620 04:00:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:26.620 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.620 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.620 04:00:28 -- setup/common.sh@31 -- # IFS=': '
00:04:26.620 04:00:28 -- setup/common.sh@31 -- # read -r var val _
00:04:26.620 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7499756 kB' 'MemUsed: 4737332 kB' 'SwapCached: 0 kB' 'Active: 466276 kB' 'Inactive: 2824376 kB' 'Active(anon): 127072 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 3174076 kB' 'Mapped: 50832 kB' 'AnonPages: 118168 kB' 'Shmem: 10496 kB' 'KernelStack: 6736 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84108 kB' 'Slab: 190700 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@32 tests each meminfo field above against HugePages_Surp and skips it with continue until the matching field is reached]
00:04:26.621 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.621 04:00:28 -- setup/common.sh@33 -- # echo 0
00:04:26.621 04:00:28 -- setup/common.sh@33 -- # return 0
00:04:26.621 04:00:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.621 04:00:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.621 04:00:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.621 04:00:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.621 node0=512 expecting 512
00:04:26.621 04:00:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:26.621 04:00:28 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:26.621
00:04:26.621 real 0m0.516s
00:04:26.621 user 0m0.223s
00:04:26.621 sys 0m0.327s
00:04:26.621 04:00:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:26.621 04:00:28 -- common/autotest_common.sh@10 -- # set +x
00:04:26.621 ************************************
00:04:26.621 END TEST per_node_1G_alloc
00:04:26.621 ************************************
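The get_meminfo calls traced above all follow one pattern: prefer the per-node meminfo file when a node is given, strip the "Node <n> " prefix those files carry, then scan "key: value" pairs until the requested field matches. A minimal standalone sketch of that pattern, assuming the same /sys and /proc paths seen in the trace (the function name and structure are illustrative, not the verbatim setup/common.sh source):

shopt -s extglob   # required for the +([0-9]) pattern below
get_meminfo_sketch() {   # usage: get_meminfo_sketch <field> [node]
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines are prefixed "Node <n> "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 0 reproduces the 0 returned above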
00:04:26.621 04:00:28 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:26.621 04:00:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:26.621 04:00:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:26.621 04:00:28 -- common/autotest_common.sh@10 -- # set +x
00:04:26.621 ************************************
00:04:26.621 START TEST even_2G_alloc
00:04:26.621 ************************************
00:04:26.621 04:00:28 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:26.621 04:00:28 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:26.621 04:00:28 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:26.621 04:00:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:26.621 04:00:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.621 04:00:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:26.621 04:00:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:26.621 04:00:28 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.622 04:00:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.622 04:00:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:26.622 04:00:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:26.622 04:00:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.622 04:00:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.622 04:00:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.622 04:00:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:26.622 04:00:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.622 04:00:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:26.622 04:00:28 -- setup/hugepages.sh@83 -- # : 0
00:04:26.622 04:00:28 -- setup/hugepages.sh@84 -- # : 0
00:04:26.622 04:00:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.622 04:00:28 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:26.622 04:00:28 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:26.622 04:00:28 -- setup/hugepages.sh@153 -- # setup output
00:04:26.622 04:00:28 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.622 04:00:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:26.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:27.141 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.141 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.141 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.141 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
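On this single-node VM the @81/@82 loop above simply hands all 1024 pages to node 0 (nodes_test[0]=1024); HUGE_EVEN_ALLOC=yes is what asks for an even spread when more nodes exist. A hypothetical sketch of that split arithmetic, with an assumed remainder policy and helper name (not taken from hugepages.sh):

split_hugepages_per_node() {   # usage: split_hugepages_per_node <total> <no_nodes>
    local total=$1 no_nodes=$2 i per_node rem
    per_node=$((total / no_nodes))
    rem=$((total % no_nodes))
    for ((i = 0; i < no_nodes; i++)); do
        # assumed policy: the lowest-numbered nodes absorb the remainder, one page each
        echo "node$i=$((per_node + (i < rem ? 1 : 0)))"
    done
}
split_hugepages_per_node 1024 1   # -> node0=1024, matching nodes_test above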
00:04:27.141 04:00:28 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:27.142 04:00:28 -- setup/hugepages.sh@89 -- # local node
00:04:27.142 04:00:28 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:27.142 04:00:28 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:27.142 04:00:28 -- setup/hugepages.sh@92 -- # local surp
00:04:27.142 04:00:28 -- setup/hugepages.sh@93 -- # local resv
00:04:27.142 04:00:28 -- setup/hugepages.sh@94 -- # local anon
00:04:27.142 04:00:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:27.142 04:00:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:27.142 04:00:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:27.142 04:00:28 -- setup/common.sh@18 -- # local node=
00:04:27.142 04:00:28 -- setup/common.sh@19 -- # local var val
00:04:27.142 04:00:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.142 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.142 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.142 04:00:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.142 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.142 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.142 04:00:28 -- setup/common.sh@31 -- # IFS=': '
00:04:27.142 04:00:28 -- setup/common.sh@31 -- # read -r var val _
00:04:27.142 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6465380 kB' 'MemAvailable: 9433864 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 467152 kB' 'Inactive: 2824376 kB' 'Active(anon): 127948 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119056 kB' 'Mapped: 51272 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190644 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106536 kB' 'KernelStack: 6792 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55848 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@32 tests each meminfo field above against AnonHugePages and skips it with continue until the matching field is reached]
00:04:27.143 04:00:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:27.143 04:00:28 -- setup/common.sh@33 -- # echo 0
00:04:27.143 04:00:28 -- setup/common.sh@33 -- # return 0
00:04:27.143 04:00:28 -- setup/hugepages.sh@97 -- # anon=0
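The @96 test above is a glob match on the transparent_hugepage mode string ("always [madvise] never"): AnonHugePages is only counted when "[never]" is not the selected mode. A standalone sketch of that check, assuming the stock THP knob path and reusing the get_meminfo_sketch helper from earlier (the function name is illustrative):

thp_adjusted_anon() {
    local mode anon=0
    mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $mode != *'[never]'* ]]; then                    # brackets mark the active mode
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
    echo "$anon"
}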
00:04:27.143 04:00:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:27.143 04:00:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.143 04:00:28 -- setup/common.sh@18 -- # local node=
00:04:27.143 04:00:28 -- setup/common.sh@19 -- # local var val
00:04:27.143 04:00:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.143 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.143 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.143 04:00:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.143 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.143 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.143 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6465380 kB' 'MemAvailable: 9433864 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466688 kB' 'Inactive: 2824376 kB' 'Active(anon): 127484 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118588 kB' 'Mapped: 51012 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190644 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106536 kB' 'KernelStack: 6728 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55784 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:27.143 04:00:28 -- setup/common.sh@31 -- # IFS=': '
00:04:27.143 04:00:28 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@32 tests each meminfo field above against HugePages_Surp and skips it with continue until the matching field is reached]
00:04:27.144 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.144 04:00:28 -- setup/common.sh@33 -- # echo 0
00:04:27.144 04:00:28 -- setup/common.sh@33 -- # return 0
00:04:27.144 04:00:28 -- setup/hugepages.sh@99 -- # surp=0
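HugePages_Surp reads 0 above because surplus pages only exist when the kernel has overcommitted beyond nr_hugepages, and that is capped by the stock vm.nr_overcommit_hugepages knob. A hedged illustration of how surplus capacity is enabled (the value 64 is an example; nothing in this run sets it):

# permit up to 64 surplus pages of the default size to be minted on demand
echo 64 | sudo tee /proc/sys/vm/nr_overcommit_hugepages
grep -E 'HugePages_(Total|Surp)' /proc/meminfo   # Surp rises only under demand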
00:04:27.144 04:00:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.144 04:00:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.144 04:00:28 -- setup/common.sh@18 -- # local node=
00:04:27.144 04:00:28 -- setup/common.sh@19 -- # local var val
00:04:27.144 04:00:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.144 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.144 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.144 04:00:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.144 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.144 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.144 04:00:28 -- setup/common.sh@31 -- # IFS=': '
00:04:27.144 04:00:28 -- setup/common.sh@31 -- # read -r var val _
00:04:27.144 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6465128 kB' 'MemAvailable: 9433612 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466436 kB' 'Inactive: 2824376 kB' 'Active(anon): 127232 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118316 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190620 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106512 kB' 'KernelStack: 6720 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55784 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@32 tests each meminfo field above against HugePages_Rsvd and skips it with continue until the matching field is reached]
00:04:27.146 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:27.146 04:00:28 -- setup/common.sh@33 -- # echo 0
00:04:27.146 04:00:28 -- setup/common.sh@33 -- # return 0
00:04:27.146 nr_hugepages=1024
00:04:27.146 04:00:28 -- setup/hugepages.sh@100 -- # resv=0
00:04:27.146 04:00:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:27.146 resv_hugepages=0
00:04:27.146 04:00:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:27.146 surplus_hugepages=0
00:04:27.146 anon_hugepages=0
00:04:27.146 04:00:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:27.146 04:00:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:27.146 04:00:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:27.146 04:00:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
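The @107/@109 checks above assert the run's accounting identity: the requested count (1024) must equal the configured pages plus surplus plus reserved, all of which read 0 extras here. A sketch of the same assertion, reusing the get_meminfo_sketch helper from earlier (the function name is illustrative, not the hugepages.sh source):

verify_hugepage_accounting() {   # usage: verify_hugepage_accounting <requested>
    local requested=$1 nr surp resv
    nr=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    (( requested == nr + surp + resv ))   # 1024 == 1024 + 0 + 0 in this run
}
verify_hugepage_accounting 1024 && echo 'hugepage accounting OK'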
setup/common.sh@19 -- # local var val 00:04:27.146 04:00:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.146 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.146 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.146 04:00:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.146 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.146 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.146 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6465128 kB' 'MemAvailable: 9433612 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466520 kB' 'Inactive: 2824376 kB' 'Active(anon): 127316 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118444 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190576 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106468 kB' 'KernelStack: 6736 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55784 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # continue 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # continue 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # continue 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # continue 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # continue 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.146 04:00:28 -- setup/common.sh@32 -- # continue 00:04:27.146 04:00:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.146 
[... xtrace loop elided: each /proc/meminfo field from MemTotal through Unaccepted is tested against HugePages_Total and skipped via continue ...]
00:04:27.147 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:27.147 04:00:28 -- setup/common.sh@33 -- # echo 1024
00:04:27.147 04:00:28 -- setup/common.sh@33 -- # return 0
00:04:27.147 04:00:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:27.147 04:00:28 -- setup/hugepages.sh@112 -- # get_nodes
00:04:27.147 04:00:28 -- setup/hugepages.sh@27 -- # local node
00:04:27.147 04:00:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.147 04:00:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:27.147 04:00:28 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:27.147 04:00:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:27.147 04:00:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.147 04:00:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.147 04:00:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:27.147 04:00:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.147 04:00:28 -- setup/common.sh@18 -- # local node=0
00:04:27.147 04:00:28 -- setup/common.sh@19 -- # local var val
00:04:27.147 04:00:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.147 04:00:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.147 04:00:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:27.147 04:00:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:27.147 04:00:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.148 04:00:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.148 04:00:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6465128 kB' 'MemUsed: 5771960 kB' 'SwapCached: 0 kB' 'Active: 466428 kB' 'Inactive: 2824376 kB' 'Active(anon): 127224 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 3174076 kB' 'Mapped: 50832 kB' 'AnonPages: 118340 kB' 'Shmem: 10496 kB' 'KernelStack: 6720 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84108 kB' 'Slab: 190576 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
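Before the per-node check, get_nodes (@27-@33 above) enumerates /sys/devices/system/node/node* and records one entry per node; with a single node the loop sets no_nodes=1, and the HugePages_Surp lookup is then pointed at node0's own meminfo file, whose dump appears just above. A hedged sketch of that enumeration; where the traced 1024 comes from is not visible in this log, so reading the per-node nr_hugepages counter is an assumption:

```bash
#!/usr/bin/env bash
shopt -s extglob nullglob
# Node enumeration as traced at setup/hugepages.sh@27-@33 (sketch).
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # "${node##*node}" reduces ".../node0" to the bare index "0". The value
    # source is an assumption: the per-node 2 MiB hugepage counter.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "no_nodes=$no_nodes"   # 1 on this VM
```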
00:04:27.148 04:00:28 -- setup/common.sh@31 -- # IFS=': '
00:04:27.148 04:00:28 -- setup/common.sh@31 -- # read -r var val _
[... xtrace loop elided: each node0 meminfo field from MemTotal through HugePages_Free is tested against HugePages_Surp and skipped via continue ...]
00:04:27.148 04:00:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
04:00:28 -- setup/common.sh@33 -- # echo 0
00:04:27.148 04:00:28 -- setup/common.sh@33 -- # return 0
00:04:27.148 04:00:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:27.148 04:00:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:27.148 node0=1024 expecting 1024
00:04:27.148 04:00:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:27.148 04:00:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:27.148 04:00:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:27.148 04:00:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:27.148
00:04:27.148 real 0m0.539s
00:04:27.148 user 0m0.251s
00:04:27.148 sys 0m0.305s
00:04:27.148 ************************************
00:04:27.148 END TEST even_2G_alloc
00:04:27.148 ************************************
00:04:27.148 04:00:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:27.148 04:00:28 -- common/autotest_common.sh@10 -- # set +x
00:04:27.149 04:00:28 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:27.149 04:00:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:27.149 04:00:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:27.149 04:00:28 -- common/autotest_common.sh@10 -- # set +x
00:04:27.149 ************************************
00:04:27.149 START TEST odd_alloc
00:04:27.149 ************************************
00:04:27.149 04:00:28 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:27.149 04:00:28 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:27.149 04:00:28 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:27.149 04:00:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:27.149 04:00:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:27.149 04:00:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:27.149 04:00:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:27.149 04:00:28 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:27.149 04:00:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:27.149 04:00:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:27.149 04:00:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:27.149 04:00:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:27.149 04:00:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:27.149 04:00:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:27.149 04:00:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:27.149 04:00:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:27.149 04:00:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:27.149 04:00:28 -- setup/hugepages.sh@83 -- # : 0
00:04:27.149 04:00:28 -- setup/hugepages.sh@84 -- # : 0
00:04:27.149 04:00:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:27.149 04:00:28 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:27.149 04:00:28 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:27.149 04:00:28 -- setup/hugepages.sh@160 -- # setup output
00:04:27.149 04:00:28 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:27.149 04:00:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:27.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:27.718 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.718 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.718 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.718 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
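odd_alloc asks for 2098176 kB of hugepages (HUGEMEM=2049, i.e. 2049 MiB), and the trace lands on nr_hugepages=1025. With the 2048 kB page size reported in the meminfo dumps, 2098176 / 2048 = 1024.5, so plain integer division would give 1024; rounding up reproduces the traced 1025 and yields the deliberately odd page count the test name implies. The rounding mode is an assumption here; only the inputs and the result come from the log:

```bash
#!/usr/bin/env bash
# Arithmetic behind nr_hugepages=1025 (setup/hugepages.sh@49-@57 in the trace).
HUGEMEM=2049                 # MiB, as exported at @160
size=$(( HUGEMEM * 1024 ))   # 2098176 kB, the argument to get_test_nr_hugepages
default_hugepages=2048       # kB, from 'Hugepagesize: 2048 kB'
# Ceiling division is an assumption that reproduces the traced result;
# bash's / operator alone would truncate 1024.5 down to 1024.
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
echo "size=${size} kB -> nr_hugepages=${nr_hugepages}"   # prints 1025
```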
00:04:27.718 04:00:29 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:27.718 04:00:29 -- setup/hugepages.sh@89 -- # local node
00:04:27.718 04:00:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:27.718 04:00:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:27.718 04:00:29 -- setup/hugepages.sh@92 -- # local surp
00:04:27.718 04:00:29 -- setup/hugepages.sh@93 -- # local resv
00:04:27.718 04:00:29 -- setup/hugepages.sh@94 -- # local anon
00:04:27.718 04:00:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:27.718 04:00:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:27.718 04:00:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:27.718 04:00:29 -- setup/common.sh@18 -- # local node=
00:04:27.718 04:00:29 -- setup/common.sh@19 -- # local var val
00:04:27.718 04:00:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.718 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.718 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.718 04:00:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.718 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.718 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.718 04:00:29 -- setup/common.sh@31 -- # IFS=': '
00:04:27.718 04:00:29 -- setup/common.sh@31 -- # read -r var val _
00:04:27.718 04:00:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6472452 kB' 'MemAvailable: 9440936 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466952 kB' 'Inactive: 2824376 kB' 'Active(anon): 127748 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118820 kB' 'Mapped: 50892 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190352 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106244 kB' 'KernelStack: 6732 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
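The @96 test above, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is a glob match: the left-hand string is the current THP mode line (the bracketed entry is the active mode) and the pattern asks whether '[never]' is absent, so anonymous hugepages are only counted when THP is not disabled. A sketch of the same gate; the sysfs path is inferred from the traced string, not shown in the log:

```bash
#!/usr/bin/env bash
# THP gate corresponding to setup/hugepages.sh@96 in the trace (sketch).
thp=/sys/kernel/mm/transparent_hugepage/enabled   # inferred path
anon_hugepages=0
if [[ -e $thp && $(< "$thp") != *"[never]"* ]]; then
    # THP can back anonymous mappings with hugepages, so the counter matters.
    anon_hugepages=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
fi
echo "anon_hugepages=${anon_hugepages} kB"   # 'AnonHugePages: 0 kB' in this run
```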
[... xtrace loop elided: each /proc/meminfo field from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped via continue ...]
00:04:27.719 04:00:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:27.719 04:00:29 -- setup/common.sh@33 -- # echo 0
00:04:27.719 04:00:29 -- setup/common.sh@33 -- # return 0
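The echo 0 / return 0 pair is how every lookup in this log hands its result back: get_meminfo prints the value on stdout and the caller captures it with command substitution, which is why the next traced line is the anon=0 assignment at setup/hugepages.sh@97. A self-contained sketch of that calling convention (awk stands in for the full parser reconstructed earlier):

```bash
#!/usr/bin/env bash
# Calling convention for the anon/surp/resv lookups traced at
# setup/hugepages.sh@97-@100 (sketch; awk replaces the traced read loop).
get_meminfo() { awk -v f="$1:" '$1 == f { print $2; exit }' /proc/meminfo; }

anon=$(get_meminfo AnonHugePages)   # the echo'd value is what $(...) captures
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
echo "anon=${anon} surp=${surp} resv=${resv}"   # all 0 in this run
```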
00:04:27.719 04:00:29 -- setup/hugepages.sh@97 -- # anon=0
00:04:27.719 04:00:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:27.719 04:00:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.719 04:00:29 -- setup/common.sh@18 -- # local node=
00:04:27.719 04:00:29 -- setup/common.sh@19 -- # local var val
00:04:27.719 04:00:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.719 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.719 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.719 04:00:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.719 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.719 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.719 04:00:29 -- setup/common.sh@31 -- # IFS=': '
00:04:27.719 04:00:29 -- setup/common.sh@31 -- # read -r var val _
00:04:27.719 04:00:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6472452 kB' 'MemAvailable: 9440936 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466528 kB' 'Inactive: 2824376 kB' 'Active(anon): 127324 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118396 kB' 'Mapped: 50992 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190364 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106256 kB' 'KernelStack: 6716 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55816 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
[... xtrace loop elided: each /proc/meminfo field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped via continue ...]
00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.721 04:00:29 -- setup/common.sh@33 -- # echo 0
00:04:27.721 04:00:29 -- setup/common.sh@33 -- # return 0
00:04:27.721 04:00:29 -- setup/hugepages.sh@99 -- # surp=0
00:04:27.721 04:00:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.721 04:00:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.721 04:00:29 -- setup/common.sh@18 -- # local node=
00:04:27.721 04:00:29 -- setup/common.sh@19 -- # local var val
00:04:27.721 04:00:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.721 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.721 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.721 04:00:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.721 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.721 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': '
00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _
'MemAvailable: 9441368 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466540 kB' 'Inactive: 2824376 kB' 'Active(anon): 127336 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118460 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190336 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106228 kB' 'KernelStack: 6736 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55816 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 
04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.721 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.721 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.722 04:00:29 -- setup/common.sh@33 -- # echo 0 00:04:27.722 04:00:29 -- setup/common.sh@33 -- # return 0 00:04:27.722 04:00:29 -- setup/hugepages.sh@100 -- # resv=0 00:04:27.722 04:00:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:27.722 nr_hugepages=1025 00:04:27.722 resv_hugepages=0 00:04:27.722 surplus_hugepages=0 00:04:27.722 anon_hugepages=0 00:04:27.722 04:00:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.722 04:00:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.722 04:00:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.722 04:00:29 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:27.722 04:00:29 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:27.722 04:00:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.722 04:00:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.722 04:00:29 -- setup/common.sh@18 -- # local node= 00:04:27.722 04:00:29 -- setup/common.sh@19 -- # local var val 00:04:27.722 04:00:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.722 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.722 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.722 04:00:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.722 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.722 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6472884 kB' 'MemAvailable: 9441368 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466508 kB' 'Inactive: 2824376 kB' 'Active(anon): 127304 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118428 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190336 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106228 kB' 'KernelStack: 6720 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 313348 kB' 
'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55816 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.722 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.722 04:00:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 
00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 
-- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.723 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.723 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.724 04:00:29 -- setup/common.sh@33 -- # echo 1025 00:04:27.724 04:00:29 -- setup/common.sh@33 -- # return 0 00:04:27.724 04:00:29 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:27.724 04:00:29 -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.724 04:00:29 -- setup/hugepages.sh@27 -- # local node 00:04:27.724 04:00:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.724 04:00:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:27.724 04:00:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.724 04:00:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.724 04:00:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.724 04:00:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.724 04:00:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.724 04:00:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.724 04:00:29 -- setup/common.sh@18 -- # local node=0 00:04:27.724 04:00:29 -- setup/common.sh@19 -- # local var val 00:04:27.724 04:00:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.724 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.724 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.724 04:00:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.724 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.724 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.724 04:00:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6472632 kB' 'MemUsed: 5764456 kB' 'SwapCached: 0 kB' 'Active: 466512 kB' 'Inactive: 2824376 kB' 'Active(anon): 127308 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 3174076 kB' 'Mapped: 50832 kB' 'AnonPages: 118360 kB' 'Shmem: 10496 kB' 'KernelStack: 6720 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84108 kB' 'Slab: 190328 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.724 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.724 04:00:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # continue 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.725 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.725 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.725 04:00:29 -- setup/common.sh@33 -- # echo 0 00:04:27.725 04:00:29 -- setup/common.sh@33 -- # return 0 00:04:27.725 04:00:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.725 node0=1025 expecting 1025 00:04:27.725 04:00:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.725 04:00:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.725 04:00:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.725 04:00:29 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:27.725 04:00:29 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:27.725 00:04:27.725 real 0m0.562s 00:04:27.725 user 0m0.256s 00:04:27.725 sys 0m0.319s 00:04:27.725 04:00:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:27.725 ************************************ 00:04:27.725 END TEST odd_alloc 00:04:27.725 ************************************ 00:04:27.725 04:00:29 -- common/autotest_common.sh@10 -- # set +x 00:04:27.725 04:00:29 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:27.725 04:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.725 04:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.725 04:00:29 -- common/autotest_common.sh@10 -- # set +x 00:04:27.725 
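The odd_alloc test that just ended above is an accounting check: verify_nr_hugepages scans /proc/meminfo once per metric (HugePages_Surp, HugePages_Rsvd, HugePages_Total), confirms 1025 == nr_hugepages + surp + resv with surp=0 and resv=0, and then attributes all 1025 pages to node0 via /sys/devices/system/node/node0/meminfo. The following is a minimal standalone sketch of the parsing idiom visible in the xtrace; it is a simplification for illustration, not SPDK's actual setup/common.sh (the real get_meminfo also takes an optional node argument and strips the "Node N" prefix), and the helper name get_meminfo_value is hypothetical.

#!/usr/bin/env bash
get_meminfo_value() {   # hypothetical, simplified cousin of get_meminfo
    local key=$1 var val _
    # Same loop shape as the trace above: IFS=': ' splits "Key:  value kB"
    # into key and value; non-matching keys fall through to 'continue'.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    echo 0   # fallback for an absent key (illustrative; the trace echoes the matched value)
}

total=$(get_meminfo_value HugePages_Total)   # 1025 in this run
surp=$(get_meminfo_value HugePages_Surp)     # 0 in this run
resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in this run
(( total == 1025 + surp + resv )) && echo "node0=$total expecting 1025"

Run on the machine in this log, the final echo reproduces the "node0=1025 expecting 1025" line printed just before the END TEST banner.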
************************************ 00:04:27.725 START TEST custom_alloc 00:04:27.725 ************************************ 00:04:27.725 04:00:29 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:27.725 04:00:29 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:27.725 04:00:29 -- setup/hugepages.sh@169 -- # local node 00:04:27.725 04:00:29 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:27.725 04:00:29 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:27.725 04:00:29 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:27.725 04:00:29 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:27.725 04:00:29 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:27.725 04:00:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:27.725 04:00:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.725 04:00:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.725 04:00:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.725 04:00:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:27.725 04:00:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.725 04:00:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.725 04:00:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.725 04:00:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:27.725 04:00:29 -- setup/hugepages.sh@83 -- # : 0 00:04:27.725 04:00:29 -- setup/hugepages.sh@84 -- # : 0 00:04:27.725 04:00:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:27.725 04:00:29 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:27.725 04:00:29 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:27.725 04:00:29 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:27.725 04:00:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.725 04:00:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.725 04:00:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:27.725 04:00:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.725 04:00:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.725 04:00:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.725 04:00:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:27.725 04:00:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:27.725 04:00:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:27.725 04:00:29 -- setup/hugepages.sh@78 -- # return 0 00:04:27.725 04:00:29 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:27.725 04:00:29 -- setup/hugepages.sh@187 -- # setup output 00:04:27.725 04:00:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.725 04:00:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.295 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.295 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.295 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.295 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.295 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.295 04:00:29 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:28.295 04:00:29 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:28.295 04:00:29 -- setup/hugepages.sh@89 -- # local node 00:04:28.295 04:00:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.295 04:00:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.295 04:00:29 -- setup/hugepages.sh@92 -- # local surp 00:04:28.295 04:00:29 -- setup/hugepages.sh@93 -- # local resv 00:04:28.295 04:00:29 -- setup/hugepages.sh@94 -- # local anon 00:04:28.295 04:00:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.295 04:00:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.295 04:00:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.295 04:00:29 -- setup/common.sh@18 -- # local node= 00:04:28.295 04:00:29 -- setup/common.sh@19 -- # local var val 00:04:28.295 04:00:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.295 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.295 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.295 04:00:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.295 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.295 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.295 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.295 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7523732 kB' 'MemAvailable: 10492216 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 467028 kB' 'Inactive: 2824376 kB' 'Active(anon): 127824 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119172 kB' 'Mapped: 50852 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190472 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106364 kB' 'KernelStack: 6728 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55848 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.296 04:00:29 -- setup/common.sh@32 -- # continue 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.296 04:00:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.296 04:00:29 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue [condensed: every remaining /proc/meminfo key, SwapTotal through HardwareCorrupted, fails this test and hits continue]
00:04:28.297 04:00:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:28.297 04:00:29 -- setup/common.sh@33 -- # echo 0
00:04:28.297 04:00:29 -- setup/common.sh@33 -- # return 0
00:04:28.297 04:00:29 -- setup/hugepages.sh@97 -- # anon=0
00:04:28.297 04:00:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:28.297 04:00:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:28.297 04:00:29 -- setup/common.sh@18 -- # local node=
00:04:28.297 04:00:29 -- setup/common.sh@19 -- # local var val
00:04:28.297 04:00:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.297 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.297 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.297 04:00:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.297 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.297 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.297 04:00:29 -- setup/common.sh@31 -- # IFS=': '
00:04:28.297 04:00:29 -- setup/common.sh@31 -- # read -r var val _
00:04:28.297 04:00:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7523732 kB' 'MemAvailable: 10492216 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466648 kB' 'Inactive: 2824376 kB' 'Active(anon): 127444 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118532 kB' 'Mapped: 50960 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190472 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106364 kB' 'KernelStack: 6728 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55816 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:28.297 04:00:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue [condensed: MemTotal through HugePages_Rsvd all fail this test and hit continue]
00:04:28.298 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.298 04:00:29 -- setup/common.sh@33 -- # echo 0
00:04:28.298 04:00:29 -- setup/common.sh@33 -- # return 0
00:04:28.298 04:00:29 -- setup/hugepages.sh@99 -- # surp=0
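(Editor's note: the loop traced above is easier to follow as source. Below is a minimal bash sketch of get_meminfo reconstructed from the xtrace lines alone, not copied from SPDK's setup/common.sh, so treat the exact shape as an assumption: pick the meminfo source, strip any "Node N " prefix, then scan key/value pairs until the requested key matches.)

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch of the lookup the trace exercises: scan a meminfo source line
    # by line and print the value of one key. Reconstructed from the xtrace
    # output above; the real setup/common.sh may differ in detail.
    get_meminfo() {
        local get=${1:-MemTotal} node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # A node argument switches the source to that node's meminfo, whose
        # lines carry a "Node N " prefix that is stripped below.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            # Non-matching keys fall through to continue, exactly as the
            # long run of "[[ X == \K\e\y ]] / continue" lines shows.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
            continue
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp it scans /proc/meminfo, as above; called as get_meminfo HugePages_Surp 0 it scans node0's meminfo instead, as happens later in this trace.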
00:04:28.298 04:00:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:28.298 04:00:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:28.298 04:00:29 -- setup/common.sh@18 -- # local node=
00:04:28.298 04:00:29 -- setup/common.sh@19 -- # local var val
00:04:28.298 04:00:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.298 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.298 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.298 04:00:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.298 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.298 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.298 04:00:29 -- setup/common.sh@31 -- # IFS=': '
00:04:28.298 04:00:29 -- setup/common.sh@31 -- # read -r var val _
00:04:28.298 04:00:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7523984 kB' 'MemAvailable: 10492468 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466656 kB' 'Inactive: 2824376 kB' 'Active(anon): 127452 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118524 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190540 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106432 kB' 'KernelStack: 6752 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 313348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55800 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:28.298 04:00:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue [condensed: MemTotal through HugePages_Free all fail this test and hit continue]
00:04:28.299 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:28.299 04:00:29 -- setup/common.sh@33 -- # echo 0
00:04:28.299 04:00:29 -- setup/common.sh@33 -- # return 0
00:04:28.299 nr_hugepages=512
00:04:28.299 resv_hugepages=0
00:04:28.299 04:00:29 -- setup/hugepages.sh@100 -- # resv=0
00:04:28.299 04:00:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:28.299 04:00:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:28.299 surplus_hugepages=0
00:04:28.299 04:00:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:28.299 anon_hugepages=0
00:04:28.299 04:00:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:28.299 04:00:29 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:28.299 04:00:29 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
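(Editor's note: condensing the bookkeeping at setup/hugepages.sh@97-110, the test has now derived anon=0, surp=0, resv=0 against a request of 512 pages. A hypothetical rendering of that accounting check follows, reusing the get_meminfo sketch above; the variable names come from the trace, everything else is assumed.)

    nr_hugepages=512                      # what the test requested earlier
    anon=$(get_meminfo AnonHugePages)     # 0 in the run above
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The allocation only counts as successful if the kernel's own total
    # equals the request plus surplus and reserved pages:
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

In this run the re-read of HugePages_Total that follows returns 512, so the arithmetic check passes.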
00:04:28.299 04:00:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:28.299 04:00:29 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:28.300 04:00:29 -- setup/common.sh@18 -- # local node=
00:04:28.300 04:00:29 -- setup/common.sh@19 -- # local var val
00:04:28.300 04:00:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.300 04:00:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.300 04:00:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.300 04:00:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.300 04:00:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.300 04:00:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.300 04:00:29 -- setup/common.sh@31 -- # IFS=': '
00:04:28.300 04:00:29 -- setup/common.sh@31 -- # read -r var val _
00:04:28.300 04:00:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 7523984 kB' 'MemAvailable: 10492468 kB' 'Buffers: 3704 kB' 'Cached: 3170372 kB' 'SwapCached: 0 kB' 'Active: 466756 kB' 'Inactive: 2824376 kB' 'Active(anon): 127552 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118628 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190540 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106432 kB' 'KernelStack: 6720 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55800 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:28.300 04:00:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue [condensed: MemTotal through Unaccepted all fail this test and hit continue]
00:04:28.301 04:00:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:28.301 04:00:29 -- setup/common.sh@33 -- # echo 512
00:04:28.301 04:00:29 -- setup/common.sh@33 -- # return 0
00:04:28.301 04:00:29 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:28.301 04:00:29 -- setup/hugepages.sh@112 -- # get_nodes
00:04:28.301 04:00:29 -- setup/hugepages.sh@27 -- # local node
00:04:28.301 04:00:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:28.301 04:00:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:28.301 04:00:29 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:28.301 04:00:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:28.301 04:00:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:28.301 04:00:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
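(Editor's note: the node walk that follows reads each node's own meminfo. Below is a sketch of the per-node bookkeeping suggested by hugepages.sh@27-33 and @115-117; it reuses the get_meminfo sketch above, nodes_test is populated earlier in the real script and so is seeded by hand here, and everything else is an assumption.)

    shopt -s extglob
    resv=0
    nodes_sys=()
    nodes_test=([0]=512)   # filled earlier in the real script; seeded for this sketch

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=512   # pages the test asked this node to hold
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))                  # at least one NUMA node must exist
    }

    get_nodes
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # Per-node surplus comes from that node's own meminfo file,
        # /sys/devices/system/node/node$node/meminfo.
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done

The same numbers can be spot-checked by hand with standard paths: grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo for the system-wide view, and grep HugePages /sys/devices/system/node/node0/meminfo for node0.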
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.301 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.301 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # 
continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # continue 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.302 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.302 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.302 04:00:30 -- setup/common.sh@33 -- # echo 0 00:04:28.302 04:00:30 -- setup/common.sh@33 -- # return 0 00:04:28.302 04:00:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.302 04:00:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.302 04:00:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.302 04:00:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.302 node0=512 expecting 512 00:04:28.302 04:00:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:28.302 04:00:30 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:28.302 00:04:28.302 real 0m0.572s 00:04:28.302 user 0m0.227s 00:04:28.302 sys 0m0.365s 00:04:28.302 04:00:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.302 ************************************ 00:04:28.302 END TEST custom_alloc 00:04:28.302 ************************************ 00:04:28.302 04:00:30 -- common/autotest_common.sh@10 -- # set +x 00:04:28.302 04:00:30 -- 
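The loop traced above, and repeated for every get_meminfo call in this suite, is the same helper each time: setup/common.sh reads /proc/meminfo, or a node's own meminfo file when a node argument is given, and splits each line on ': ' until the requested field matches. A condensed standalone sketch of that pattern follows; it is reconstructed from the xtrace for illustration, not copied verbatim from setup/common.sh, and the top-level call at the end is hypothetical usage.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace above (reconstructed,
# not the verbatim SPDK helper). Prints the value column of one field.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # With a node argument, prefer the per-node view; its lines carry a
    # "Node <N> " prefix that is stripped just after mapfile.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # This read/match/continue pair is what the xtrace repeats once
        # per meminfo field in the traces above.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Surp 0   # would print 0 for the node0 state shown above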
00:04:28.302 04:00:30 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:28.302 04:00:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:28.302 04:00:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:28.302 04:00:30 -- common/autotest_common.sh@10 -- # set +x
00:04:28.302 ************************************
00:04:28.302 START TEST no_shrink_alloc
00:04:28.302 ************************************
00:04:28.302 04:00:30 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:28.302 04:00:30 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:28.302 04:00:30 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:28.302 04:00:30 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:28.302 04:00:30 -- setup/hugepages.sh@51 -- # shift
00:04:28.302 04:00:30 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:28.302 04:00:30 -- setup/hugepages.sh@52 -- # local node_ids
00:04:28.302 04:00:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:28.302 04:00:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:28.302 04:00:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:28.302 04:00:30 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:28.302 04:00:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:28.302 04:00:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:28.302 04:00:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:28.302 04:00:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:28.302 04:00:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:28.302 04:00:30 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:28.302 04:00:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:28.302 04:00:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:28.302 04:00:30 -- setup/hugepages.sh@73 -- # return 0
00:04:28.302 04:00:30 -- setup/hugepages.sh@198 -- # setup output
00:04:28.561 04:00:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:28.561 04:00:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:28.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:28.820 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:28.820 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:28.820 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:28.820 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
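For reference, the get_test_nr_hugepages 2097152 0 call traced above is where nr_hugepages=1024 comes from: a 2097152 kB (2 GiB) request divided by the 2048 kB Hugepagesize reported later in this log leaves 1024 pages, all assigned to the single user node 0. A back-of-envelope sketch of that sizing step under that assumed derivation; the variable names mirror the trace, but the division itself is inferred rather than shown in the xtrace.

# Assumed reconstruction of the sizing arithmetic, not the verbatim hugepages.sh.
size=2097152                        # requested pool, in kB (2 GiB)
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB here
(( size >= default_hugepages )) || exit 1
nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024 pages
nodes_test=()
nodes_test[0]=$nr_hugepages         # the single user node, 0, gets the whole pool
echo "nr_hugepages=$nr_hugepages"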
00:04:28.820 04:00:30 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:28.820 04:00:30 -- setup/hugepages.sh@89 -- # local node
00:04:28.820 04:00:30 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:28.820 04:00:30 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:28.820 04:00:30 -- setup/hugepages.sh@92 -- # local surp
00:04:28.820 04:00:30 -- setup/hugepages.sh@93 -- # local resv
00:04:28.820 04:00:30 -- setup/hugepages.sh@94 -- # local anon
00:04:28.820 04:00:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:28.820 04:00:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:28.820 04:00:30 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:28.820 04:00:30 -- setup/common.sh@18 -- # local node=
00:04:28.820 04:00:30 -- setup/common.sh@19 -- # local var val
00:04:28.820 04:00:30 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.820 04:00:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.820 04:00:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.820 04:00:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.820 04:00:30 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.820 04:00:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.820 04:00:30 -- setup/common.sh@31 -- # IFS=': '
00:04:28.820 04:00:30 -- setup/common.sh@31 -- # read -r var val _
00:04:28.820 04:00:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6489060 kB' 'MemAvailable: 9457548 kB' 'Buffers: 3704 kB' 'Cached: 3170376 kB' 'SwapCached: 0 kB' 'Active: 467300 kB' 'Inactive: 2824380 kB' 'Active(anon): 128096 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119244 kB' 'Mapped: 50900 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190372 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106264 kB' 'KernelStack: 6828 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:28.820 [... repeated setup/common.sh@31-@32 read/match/continue xtrace elided: every field from MemTotal through HardwareCorrupted is tested against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped ...]
00:04:28.821 04:00:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:28.821 04:00:30 -- setup/common.sh@33 -- # echo 0
00:04:28.821 04:00:30 -- setup/common.sh@33 -- # return 0
00:04:28.821 04:00:30 -- setup/hugepages.sh@97 -- # anon=0
00:04:28.821 04:00:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:28.821 04:00:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:28.821 04:00:30 -- setup/common.sh@18 -- # local node=
00:04:28.821 04:00:30 -- setup/common.sh@19 -- # local var val
00:04:28.821 04:00:30 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.821 04:00:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.821 04:00:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.821 04:00:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.821 04:00:30 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.821 04:00:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.821 04:00:30 -- setup/common.sh@31 -- # IFS=': '
00:04:28.821 04:00:30 -- setup/common.sh@31 -- # read -r var val _
00:04:28.821 04:00:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6489060 kB' 'MemAvailable: 9457548 kB' 'Buffers: 3704 kB' 'Cached: 3170376 kB' 'SwapCached: 0 kB' 'Active: 466528 kB' 'Inactive: 2824380 kB' 'Active(anon): 127324 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118396 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190376 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106268 kB' 'KernelStack: 6720 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55800 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:28.821 [... repeated setup/common.sh@31-@32 read/match/continue xtrace elided: every field from MemTotal through HugePages_Rsvd is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped ...]
00:04:28.823 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.823 04:00:30 -- setup/common.sh@33 -- # echo 0
00:04:28.823 04:00:30 -- setup/common.sh@33 -- # return 0
00:04:28.823 04:00:30 -- setup/hugepages.sh@99 -- # surp=0
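At this point verify_nr_hugepages has collected anon=0 and surp=0 and is about to read HugePages_Rsvd; those three values feed the accounting check traced a few lines below as (( 1024 == nr_hugepages + surp + resv )). The same arithmetic as a sketch, reusing the get_meminfo sketch above and the nr_hugepages value from the sizing sketch:

# Sketch of the pool-level verification arithmetic, assuming get_meminfo
# and nr_hugepages as defined in the earlier sketches.
anon=$(get_meminfo AnonHugePages)    # only sampled while THP is not set to [never]
surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the configured pool
resv=$(get_meminfo HugePages_Rsvd)   # reserved-but-unfaulted pages
total=$(get_meminfo HugePages_Total)
# The pool the kernel reports must equal what the test asked for, corrected
# for surplus and reserved pages; in this run all corrections are 0.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting is off" >&2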
00:04:28.823 04:00:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:28.823 04:00:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:28.823 04:00:30 -- setup/common.sh@18 -- # local node=
00:04:28.823 04:00:30 -- setup/common.sh@19 -- # local var val
00:04:28.823 04:00:30 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.823 04:00:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.823 04:00:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.823 04:00:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.823 04:00:30 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.823 04:00:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.823 04:00:30 -- setup/common.sh@31 -- # IFS=': '
00:04:28.823 04:00:30 -- setup/common.sh@31 -- # read -r var val _
00:04:28.823 04:00:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6489060 kB' 'MemAvailable: 9457548 kB' 'Buffers: 3704 kB' 'Cached: 3170376 kB' 'SwapCached: 0 kB' 'Active: 466592 kB' 'Inactive: 2824380 kB' 'Active(anon): 127388 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118516 kB' 'Mapped: 50832 kB' 'Shmem: 10496 kB' 'KReclaimable: 84108 kB' 'Slab: 190376 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106268 kB' 'KernelStack: 6736 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 313556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55800 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
00:04:28.823 [... repeated setup/common.sh@31-@32 read/match/continue xtrace elided: every field from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped ...]
00:04:28.824 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:28.824 04:00:30 -- setup/common.sh@33 -- # echo 0
00:04:28.824 04:00:30 -- setup/common.sh@33 -- # return 0
00:04:28.824 nr_hugepages=1024
00:04:28.824 resv_hugepages=0
00:04:28.824 04:00:30 -- setup/hugepages.sh@100 -- # resv=0
00:04:28.824 04:00:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:28.824 04:00:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:28.824 surplus_hugepages=0
00:04:28.824 anon_hugepages=0
00:04:28.824 04:00:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:28.824 04:00:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:28.824 04:00:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:28.824 04:00:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
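With the pool-level accounting verified, the helper re-reads HugePages_Total; the earlier custom_alloc output ("node0=512 expecting 512") shows the companion per-node comparison this suite performs. A sketch of that per-node cross-check, again assuming the node-aware get_meminfo and the nodes_test array from the sketches above; nodes_sys and no_nodes follow the names in the trace.

# Sketch of the per-node cross-check, assuming get_meminfo and nodes_test
# from the earlier sketches (not the verbatim hugepages.sh loop).
nodes_sys=()
no_nodes=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
    (( ++no_nodes ))
done
for node in "${!nodes_test[@]}"; do
    # A log line such as "node0=1024 expecting 1024" would come from here.
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done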
# read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- 
setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 
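[annotation] The loop traced above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo field until it reaches the requested one (here HugePages_Total). A minimal sketch reconstructed from the traced statements follows; the function and variable names come from the trace itself, but anything beyond what the trace shows is an assumption, not the verbatim script:

    shopt -s extglob   # assumed: needed for the +([0-9]) patterns seen in the trace
    get_meminfo() {    # e.g. get_meminfo HugePages_Total, get_meminfo HugePages_Surp 0
        local get=$1 node=$2
        local var val line
        local mem_f mem
        mem_f=/proc/meminfo
        # An empty $node makes this test the literal path node/meminfo, exactly as
        # the trace shows; a real node number switches to that node's own file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on node files
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # rendered above as [[ MemTotal == \H\u\g\e... ]]
            echo "$val"                        # the "echo 1024" / "return 0" seen just below
            return 0
        done
        return 1
    }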
00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.084 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.084 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.085 04:00:30 -- setup/common.sh@33 -- # echo 1024 00:04:29.085 04:00:30 -- setup/common.sh@33 -- # return 0 00:04:29.085 04:00:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.085 04:00:30 -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.085 04:00:30 -- setup/hugepages.sh@27 -- # local node 00:04:29.085 04:00:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.085 04:00:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.085 04:00:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.085 04:00:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.085 04:00:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.085 04:00:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.085 04:00:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.085 04:00:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.085 04:00:30 -- setup/common.sh@18 -- # local node=0 
00:04:29.085 04:00:30 -- setup/common.sh@19 -- # local var val 00:04:29.085 04:00:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.085 04:00:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.085 04:00:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.085 04:00:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.085 04:00:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.085 04:00:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6489060 kB' 'MemUsed: 5748028 kB' 'SwapCached: 0 kB' 'Active: 466564 kB' 'Inactive: 2824380 kB' 'Active(anon): 127360 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 3174080 kB' 'Mapped: 50832 kB' 'AnonPages: 118472 kB' 'Shmem: 10496 kB' 'KernelStack: 6736 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84108 kB' 'Slab: 190372 kB' 'SReclaimable: 84108 kB' 'SUnreclaim: 106264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.085 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.085 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- 
# continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # continue 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.086 04:00:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.086 04:00:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.086 04:00:30 -- setup/common.sh@33 -- # echo 0 00:04:29.086 04:00:30 -- setup/common.sh@33 -- # return 0 00:04:29.086 04:00:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.086 node0=1024 expecting 1024 00:04:29.086 04:00:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.086 04:00:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.086 04:00:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.086 04:00:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.086 04:00:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.086 04:00:30 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:29.086 04:00:30 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:29.086 04:00:30 -- setup/hugepages.sh@202 -- # setup output 00:04:29.086 04:00:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.086 04:00:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.345 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.345 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.345 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.345 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:29.345 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:29.345 04:00:31 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:29.345 04:00:31 -- setup/hugepages.sh@89 -- # local node 00:04:29.345 04:00:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.345 04:00:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.345 04:00:31 -- setup/hugepages.sh@92 -- # local surp 00:04:29.345 04:00:31 -- setup/hugepages.sh@93 -- # local resv 00:04:29.345 04:00:31 -- setup/hugepages.sh@94 -- # local anon 00:04:29.345 04:00:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.345 04:00:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.345 04:00:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.345 04:00:31 -- setup/common.sh@18 -- # local node= 00:04:29.345 04:00:31 -- setup/common.sh@19 -- # local var val 00:04:29.345 04:00:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.345 04:00:31 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.345 04:00:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.345 04:00:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.345 04:00:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.345 04:00:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6490176 kB' 'MemAvailable: 9458660 kB' 'Buffers: 3704 kB' 'Cached: 3170376 kB' 'SwapCached: 0 kB' 'Active: 466388 kB' 'Inactive: 2824380 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118352 kB' 'Mapped: 50112 kB' 'Shmem: 10496 kB' 'KReclaimable: 84100 kB' 'Slab: 190412 kB' 'SReclaimable: 84100 kB' 'SUnreclaim: 106312 kB' 'KernelStack: 6876 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 305288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55832 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
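[annotation] The fresh verify_nr_hugepages run that begins here first decides whether anonymous transparent hugepages need counting: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 matches against "[never]", and since THP here sits at madvise rather than disabled, AnonHugePages is fetched from meminfo. A sketch of that branch, assuming the left-hand side comes from the standard THP toggle file (the trace shows only its contents, not the read):

    # If THP is not globally disabled, anonymous hugepages could skew the
    # explicit-hugepage totals, so current usage is read and folded in.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # here: "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # the scan in progress below; it returns 0
    else
        anon=0
    fi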
00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.345 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.345 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.346 04:00:31 -- setup/common.sh@33 -- # echo 0 00:04:29.346 04:00:31 -- setup/common.sh@33 -- # return 0 00:04:29.346 04:00:31 -- setup/hugepages.sh@97 -- # anon=0 00:04:29.346 04:00:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.346 04:00:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.346 04:00:31 -- setup/common.sh@18 -- # local node= 00:04:29.346 04:00:31 -- setup/common.sh@19 -- # local var val 00:04:29.346 04:00:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.346 04:00:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.346 04:00:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.346 04:00:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.346 04:00:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.346 04:00:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.346 04:00:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6490880 kB' 'MemAvailable: 9459364 kB' 'Buffers: 3704 kB' 'Cached: 3170376 kB' 'SwapCached: 0 kB' 'Active: 465616 kB' 'Inactive: 2824380 kB' 'Active(anon): 126412 kB' 'Inactive(anon): 
0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117488 kB' 'Mapped: 49984 kB' 'Shmem: 10496 kB' 'KReclaimable: 84100 kB' 'Slab: 190356 kB' 'SReclaimable: 84100 kB' 'SUnreclaim: 106256 kB' 'KernelStack: 6720 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 305288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55768 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.346 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.346 04:00:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # continue 
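[annotation] Once anon, surp, and resv are in hand, the verifier reduces to a single arithmetic identity, the (( 1024 == nr_hugepages + surp + resv )) test seen earlier at hugepages.sh@107 and repeated per node. Expressed compactly, with the values this run produces:

    nr_hugepages=1024                      # requested count (trace: nr_hugepages=1024)
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'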
00:04:29.347 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.347 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.347 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.608 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.608 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609 04:00:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 
04:00:31 -- setup/common.sh@32 -- # continue 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609
[... identical read/continue iterations over Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd elided ...]
04:00:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.609 04:00:31 -- setup/common.sh@33 -- # echo 0 00:04:29.609 04:00:31 -- setup/common.sh@33 -- # return 0 00:04:29.609 04:00:31 -- setup/hugepages.sh@99 -- # surp=0 00:04:29.609 04:00:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.609 04:00:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.609 04:00:31 -- setup/common.sh@18 -- # local node= 00:04:29.609 04:00:31 -- setup/common.sh@19 -- # local var val 00:04:29.609 04:00:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.609 04:00:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.609 04:00:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.609 04:00:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.609 04:00:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.609 04:00:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.609 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.609
04:00:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6490880 kB' 'MemAvailable: 9459364 kB' 'Buffers: 3704 kB' 'Cached: 3170376 kB' 'SwapCached: 0 kB' 'Active: 465500 kB' 'Inactive: 2824380 kB' 'Active(anon): 126296 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117372 kB' 'Mapped: 49984 kB' 'Shmem: 10496 kB' 'KReclaimable: 84100 kB' 'Slab: 190356 kB' 'SReclaimable: 84100 kB' 'SUnreclaim: 106256 kB' 'KernelStack: 6672 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 305288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55752 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB'
[... per-key [[ $var == HugePages_Rsvd ]] / continue scan over every line of the dump above elided; only the HugePages_Rsvd entry matches ...]
04:00:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.610 04:00:31 -- setup/common.sh@33 -- # echo 0 00:04:29.610 04:00:31 -- setup/common.sh@33 -- # return 0 00:04:29.610 04:00:31 -- setup/hugepages.sh@100 -- # resv=0 00:04:29.610 04:00:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.610 nr_hugepages=1024 00:04:29.610 04:00:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.610 resv_hugepages=0 00:04:29.611 04:00:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.611 surplus_hugepages=0 00:04:29.611 anon_hugepages=0 00:04:29.611 04:00:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.611 04:00:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.611 04:00:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.611 04:00:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.611 04:00:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.611 04:00:31 -- setup/common.sh@18 -- # local node= 00:04:29.611 04:00:31 -- setup/common.sh@19 -- # local var val 00:04:29.611 04:00:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.611 04:00:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.611 04:00:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.611 04:00:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.611 04:00:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.611 04:00:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.611 04:00:31 -- setup/common.sh@31 -- # IFS=': '
04:00:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6490880 kB' 'MemAvailable: 9459364 kB' 'Buffers: 3704 kB' 'Cached: 3170376 kB' 'SwapCached: 0 kB' 'Active: 465468 kB' 'Inactive: 2824380 kB' 'Active(anon): 126264 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117348 kB' 'Mapped: 49984 kB' 'Shmem: 10496 kB' 'KReclaimable: 84100 kB' 'Slab: 190356 kB' 'SReclaimable: 84100 kB' 'SUnreclaim: 106256 kB' 'KernelStack: 6672 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 305288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55736 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 200556 kB' 'DirectMap2M: 5042176 kB' 'DirectMap1G: 9437184 kB' 00:04:29.611 04:00:31 -- setup/common.sh@31 -- # read -r var val _
[... same per-key scan elided; only the HugePages_Total entry matches ...]
04:00:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.612 04:00:31 -- setup/common.sh@33 -- # echo 1024 00:04:29.612 04:00:31 -- setup/common.sh@33 -- # return 0 00:04:29.612 04:00:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.612 04:00:31 -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.612 04:00:31 -- setup/hugepages.sh@27 -- # local node 00:04:29.612 04:00:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.612 04:00:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.612 04:00:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.612 04:00:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.612 04:00:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.612 04:00:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.612 04:00:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.612 04:00:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.612 04:00:31 -- setup/common.sh@18 -- # local node=0 00:04:29.612 04:00:31 -- setup/common.sh@19 -- # local var val 00:04:29.612 04:00:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.612 04:00:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.612 04:00:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.612 04:00:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.612 04:00:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.612 04:00:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.612 04:00:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.612 04:00:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.612
04:00:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237088 kB' 'MemFree: 6490880 kB' 'MemUsed: 5746208 kB' 'SwapCached: 0 kB' 'Active: 465284 kB' 'Inactive: 2824380 kB' 'Active(anon): 126080 kB' 'Inactive(anon): 0 kB' 'Active(file): 339204 kB' 'Inactive(file): 2824380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3174080 kB' 'Mapped: 49984 kB' 'AnonPages: 117212 kB' 'Shmem: 10496 kB' 'KernelStack: 6688 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84100 kB' 'Slab: 190356 kB' 'SReclaimable: 84100 kB' 'SUnreclaim: 106256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... per-key scan of the node0 meminfo elided; only the HugePages_Surp entry matches ...]
04:00:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.613 04:00:31 -- setup/common.sh@33 -- # echo 0 00:04:29.613 04:00:31 -- setup/common.sh@33 -- # return 0 00:04:29.613 node0=1024 expecting 1024 04:00:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.613 04:00:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.613 04:00:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.613 04:00:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.613 04:00:31 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.613 04:00:31 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.613
00:04:29.613 real 0m1.139s 00:04:29.613 user 0m0.487s 00:04:29.613 sys 0m0.677s 00:04:29.613 04:00:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.613 ************************************ 00:04:29.613 END TEST no_shrink_alloc 00:04:29.613 ************************************ 00:04:29.613 04:00:31 -- common/autotest_common.sh@10 -- # set +x 00:04:29.613 04:00:31 -- setup/hugepages.sh@217 -- # clear_hp 00:04:29.613 04:00:31 -- setup/hugepages.sh@37 -- # local node hp 00:04:29.613 04:00:31 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.613 04:00:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.613 04:00:31 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.613 04:00:31 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.613 04:00:31 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.613 04:00:31 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:29.613 04:00:31 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:29.613 ************************************ 00:04:29.613 END TEST hugepages 00:04:29.613 ************************************ 00:04:29.613
00:04:29.613 real 0m4.831s 00:04:29.613 user 0m2.065s 00:04:29.613 sys 0m2.825s 00:04:29.613 04:00:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.613 04:00:31 -- common/autotest_common.sh@10 -- # set +x 00:04:29.613 04:00:31 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:29.613 04:00:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.613 04:00:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
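The hugepages trace condensed above exercises one helper over and over: get_meminfo reads /proc/meminfo (or a node's own meminfo file), splits each line on ': ', and walks the keys until the requested one matches. A minimal, self-contained sketch of that pattern follows; get_meminfo_sketch is a hypothetical name and the code is simplified from what the setup/common.sh trace shows, not the verbatim SPDK helper.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup pattern traced from setup/common.sh.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Usage: get_meminfo_sketch KEY [NODE]
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ line mem
        local mem_f=/proc/meminfo
        # With a node index, read that node's meminfo instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it, as the
        # traced mem=("${mem[@]#Node +([0-9]) }") expansion does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # ': ' splits into key and value; non-matching keys are skipped,
            # which is what all the "continue" trace lines above were doing.
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total   # prints 1024 on the system traced above
    get_meminfo_sketch HugePages_Surp 0  # reads node0's file; prints 0 here

With three such lookups in hand, checks like (( 1024 == nr_hugepages + surp + resv )) above are plain arithmetic over the returned values.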
04:00:31 -- common/autotest_common.sh@10 -- # set +x 00:04:29.613 ************************************ 00:04:29.613 START TEST driver 00:04:29.613 ************************************ 00:04:29.613 04:00:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:29.613 * Looking for test storage... 00:04:29.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:29.613 04:00:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:29.613 04:00:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:29.613 04:00:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:29.900 04:00:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:29.900 04:00:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:29.900 04:00:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:29.900 04:00:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:29.900 04:00:31 -- scripts/common.sh@335 -- # IFS=.-: 00:04:29.900 04:00:31 -- scripts/common.sh@335 -- # read -ra ver1 00:04:29.900 04:00:31 -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.900 04:00:31 -- scripts/common.sh@336 -- # read -ra ver2 00:04:29.900 04:00:31 -- scripts/common.sh@337 -- # local 'op=<' 00:04:29.900 04:00:31 -- scripts/common.sh@339 -- # ver1_l=2 00:04:29.900 04:00:31 -- scripts/common.sh@340 -- # ver2_l=1 00:04:29.900 04:00:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:29.900 04:00:31 -- scripts/common.sh@343 -- # case "$op" in 00:04:29.900 04:00:31 -- scripts/common.sh@344 -- # : 1 00:04:29.900 04:00:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:29.900 04:00:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.900 04:00:31 -- scripts/common.sh@364 -- # decimal 1 00:04:29.900 04:00:31 -- scripts/common.sh@352 -- # local d=1 00:04:29.900 04:00:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.900 04:00:31 -- scripts/common.sh@354 -- # echo 1 00:04:29.900 04:00:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:29.900 04:00:31 -- scripts/common.sh@365 -- # decimal 2 00:04:29.900 04:00:31 -- scripts/common.sh@352 -- # local d=2 00:04:29.900 04:00:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.900 04:00:31 -- scripts/common.sh@354 -- # echo 2 00:04:29.900 04:00:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:29.900 04:00:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:29.900 04:00:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:29.900 04:00:31 -- scripts/common.sh@367 -- # return 0 00:04:29.900 04:00:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.900 04:00:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.900 --rc genhtml_branch_coverage=1 00:04:29.900 --rc genhtml_function_coverage=1 00:04:29.900 --rc genhtml_legend=1 00:04:29.900 --rc geninfo_all_blocks=1 00:04:29.900 --rc geninfo_unexecuted_blocks=1 00:04:29.900 00:04:29.900 ' 00:04:29.900 04:00:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.900 --rc genhtml_branch_coverage=1 00:04:29.900 --rc genhtml_function_coverage=1 00:04:29.900 --rc genhtml_legend=1 00:04:29.900 --rc geninfo_all_blocks=1 00:04:29.900 --rc geninfo_unexecuted_blocks=1 00:04:29.900 00:04:29.900 ' 00:04:29.900 04:00:31 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:04:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.900 --rc genhtml_branch_coverage=1 00:04:29.900 --rc genhtml_function_coverage=1 00:04:29.900 --rc genhtml_legend=1 00:04:29.900 --rc geninfo_all_blocks=1 00:04:29.900 --rc geninfo_unexecuted_blocks=1 00:04:29.900 00:04:29.900 ' 00:04:29.900 04:00:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.900 --rc genhtml_branch_coverage=1 00:04:29.900 --rc genhtml_function_coverage=1 00:04:29.900 --rc genhtml_legend=1 00:04:29.900 --rc geninfo_all_blocks=1 00:04:29.900 --rc geninfo_unexecuted_blocks=1 00:04:29.900 00:04:29.900 ' 00:04:29.900 04:00:31 -- setup/driver.sh@68 -- # setup reset 00:04:29.900 04:00:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.900 04:00:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.480 04:00:37 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:36.480 04:00:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.480 04:00:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.480 04:00:37 -- common/autotest_common.sh@10 -- # set +x 00:04:36.480 ************************************ 00:04:36.480 START TEST guess_driver 00:04:36.480 ************************************ 00:04:36.480 04:00:37 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:36.480 04:00:37 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:36.480 04:00:37 -- setup/driver.sh@47 -- # local fail=0 00:04:36.480 04:00:37 -- setup/driver.sh@49 -- # pick_driver 00:04:36.480 04:00:37 -- setup/driver.sh@36 -- # vfio 00:04:36.480 04:00:37 -- setup/driver.sh@21 -- # local iommu_grups 00:04:36.480 04:00:37 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:36.480 04:00:37 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:36.480 04:00:37 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:36.480 04:00:37 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:36.480 04:00:37 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:36.480 04:00:37 -- setup/driver.sh@32 -- # return 1 00:04:36.480 04:00:37 -- setup/driver.sh@38 -- # uio 00:04:36.480 04:00:37 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:36.480 04:00:37 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:36.480 04:00:37 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:36.480 04:00:37 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:36.480 04:00:37 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:36.480 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:36.480 04:00:37 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:36.480 Looking for driver=uio_pci_generic 00:04:36.480 04:00:37 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:36.480 04:00:37 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:36.480 04:00:37 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:36.480 04:00:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.480 04:00:37 -- setup/driver.sh@45 -- # setup output config 00:04:36.480 04:00:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.480 04:00:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.480 
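The guess_driver trace above boils down to a two-way pick: vfio is chosen only when the host actually has IOMMU groups (or unsafe no-IOMMU mode is enabled), and otherwise the script falls back to uio_pci_generic provided modprobe can resolve it to real kernel modules, which is what happened on this VM. Below is a condensed sketch of that decision; pick_driver_sketch is a hypothetical name, and this is a reading of the traced logic rather than the verbatim setup/driver.sh.

    #!/usr/bin/env bash
    # Condensed sketch of the driver pick traced from setup/driver.sh.
    pick_driver_sketch() {
        local groups=(/sys/kernel/iommu_groups/*)
        # Without nullglob an empty directory leaves the literal pattern in
        # the array, so also check that the first entry really exists.
        if (( ${#groups[@]} > 0 )) && [[ -e ${groups[0]} ]]; then
            echo vfio-pci
            return 0
        fi
        # The trace shows (( 0 > 0 )) failing and the unsafe no-IOMMU knob
        # unset, so the script asked modprobe whether uio_pci_generic
        # resolves to actual .ko modules; the same check is reused here.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found'
        return 1
    }

    driver=$(pick_driver_sketch)        # uio_pci_generic on the VM traced above
    echo "Looking for driver=$driver"

The marker loop that follows (read -r _ _ _ _ marker setup_driver) then just confirms that every device line emitted by setup.sh config was bound to that same driver.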
04:00:38 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:36.480 04:00:38 -- setup/driver.sh@58 -- # continue 00:04:36.480 04:00:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.480 04:00:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.480 04:00:38 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:36.480 04:00:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.480 04:00:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.480 04:00:38 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:36.480 04:00:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.480 04:00:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.480 04:00:38 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:36.480 04:00:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.742 04:00:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.742 04:00:38 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:36.742 04:00:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.742 04:00:38 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:36.742 04:00:38 -- setup/driver.sh@65 -- # setup reset 00:04:36.742 04:00:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.742 04:00:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.367 ************************************ 00:04:43.367 END TEST guess_driver 00:04:43.367 ************************************ 00:04:43.367 00:04:43.367 real 0m6.989s 00:04:43.367 user 0m0.702s 00:04:43.367 sys 0m1.294s 00:04:43.367 04:00:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.367 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.367 00:04:43.367 real 0m12.998s 00:04:43.367 user 0m1.070s 00:04:43.367 sys 0m1.994s 00:04:43.367 04:00:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.367 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.367 ************************************ 00:04:43.367 END TEST driver 00:04:43.367 ************************************ 00:04:43.367 04:00:44 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:43.367 04:00:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.367 04:00:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.367 04:00:44 -- common/autotest_common.sh@10 -- # set +x 00:04:43.367 ************************************ 00:04:43.367 START TEST devices 00:04:43.367 ************************************ 00:04:43.367 04:00:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:43.367 * Looking for test storage... 
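Both driver.sh above and devices.sh, whose preamble starts here, open the same way: locate the test storage directory, then probe the installed lcov with the lt/cmp_versions helpers (the second run of that probe is condensed just below). The comparison is a plain field-by-field numeric walk over the two version strings; a minimal sketch under that reading, with lt_sketch as a hypothetical name rather than the scripts/common.sh helper:

    #!/usr/bin/env bash
    # Sketch of the version comparison traced from scripts/common.sh:
    # split both versions on '.', '-' and ':' and compare field by field.
    lt_sketch() {   # usage: lt_sketch 1.15 2 -> success when $1 < $2
        local ver1 ver2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # Missing or non-numeric fields count as 0, mirroring the
            # decimal() fallback visible in the trace.
            d1=${ver1[v]:-0}; [[ $d1 =~ ^[0-9]+$ ]] || d1=0
            d2=${ver2[v]:-0}; [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            (( d1 > d2 )) && return 1
            (( d1 < d2 )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt_sketch 1.15 2 && echo 'old lcov: enable branch/function coverage opts'

Since 1.15 compares below 2, the probe succeeds and the tests export the LCOV_OPTS/LCOV flags seen in the trace.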
00:04:43.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 04:00:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
[... lcov --version probe and cmp_versions trace, identical to the driver test's preamble above; the same LCOV_OPTS and LCOV values are exported again ...]
04:00:44 -- setup/devices.sh@190 -- # trap cleanup EXIT 04:00:44 -- setup/devices.sh@192 -- # setup reset 04:00:44 -- setup/common.sh@9 -- # [[ reset == output ]] 04:00:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.962 04:00:45 -- setup/devices.sh@194 -- # get_zoned_devs 04:00:45 -- common/autotest_common.sh@1664 -- # zoned_devs=() 04:00:45 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 04:00:45 -- common/autotest_common.sh@1665 -- # local nvme bdf
[... is_block_zoned checks over /sys/block/nvme0c0n1, nvme0n1, nvme1n1, nvme1n2, nvme1n3, nvme2n1 and nvme3n1 elided; every queue/zoned file reads none, so zoned_devs stays empty ...]
04:00:45 -- setup/devices.sh@196 -- # blocks=() 04:00:45 -- setup/devices.sh@196 -- # declare -a blocks 04:00:45 -- setup/devices.sh@197 -- # blocks_to_pci=() 04:00:45 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 04:00:45 -- setup/devices.sh@198 -- # min_disk_size=3221225472 04:00:45 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme0n1 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme0 04:00:45 -- setup/devices.sh@202 -- # pci=0000:00:09.0 04:00:45 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 04:00:45 -- setup/devices.sh@204 -- # block_in_use nvme0n1 04:00:45 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 04:00:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:43.963 No valid GPT data, bailing 04:00:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 04:00:45 -- scripts/common.sh@393 -- # pt= 04:00:45 -- scripts/common.sh@394 -- # return 1 04:00:45 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 04:00:45 -- setup/common.sh@76 -- # local dev=nvme0n1 04:00:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 04:00:45 -- setup/common.sh@80 -- # echo 1073741824 04:00:45 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size ))
04:00:45 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme1n1 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme1 04:00:45 -- setup/devices.sh@202 -- # pci=0000:00:08.0 04:00:45 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 04:00:45 -- setup/devices.sh@204 -- # block_in_use nvme1n1 04:00:45 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 04:00:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:44.224 No valid GPT data, bailing 04:00:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 04:00:45 -- scripts/common.sh@393 -- # pt= 04:00:45 -- scripts/common.sh@394 -- # return 1 04:00:45 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 04:00:45 -- setup/common.sh@76 -- # local dev=nvme1n1 04:00:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 04:00:45 -- setup/common.sh@80 -- # echo 4294967296 04:00:45 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 04:00:45 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 04:00:45 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:44.225 04:00:45 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:44.225 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:44.225 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:44.225 04:00:45 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:44.225 04:00:45 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:44.225 04:00:45 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:44.225 04:00:45 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:44.225 04:00:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:44.225 No valid GPT data, bailing 00:04:44.225 04:00:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:44.225 04:00:45 -- scripts/common.sh@393 -- # pt= 00:04:44.225 04:00:45 -- scripts/common.sh@394 -- # return 1 00:04:44.225 04:00:45 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:44.225 04:00:45 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:44.225 04:00:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:44.225 04:00:45 -- setup/common.sh@80 -- # echo 4294967296 00:04:44.225 04:00:45 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:44.225 04:00:45 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:44.225 04:00:45 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:44.225 04:00:45 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:44.225 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:44.225 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:44.225 04:00:45 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:44.225 04:00:45 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:44.225 04:00:45 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:44.225 04:00:45 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:44.225 04:00:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:44.225 No valid GPT data, bailing 00:04:44.225 04:00:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:44.225 04:00:45 -- scripts/common.sh@393 -- # pt= 00:04:44.225 04:00:45 -- scripts/common.sh@394 -- # return 1 00:04:44.225 04:00:45 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:44.225 04:00:45 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:44.225 04:00:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:44.225 04:00:45 -- setup/common.sh@80 -- # echo 4294967296 00:04:44.225 04:00:45 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:44.225 04:00:45 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:44.225 04:00:45 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:44.225 04:00:45 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:44.225 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:44.225 04:00:45 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:44.225 04:00:45 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:44.225 04:00:45 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:44.225 04:00:45 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:44.225 04:00:45 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:44.225 04:00:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:44.486 No valid GPT data, bailing 00:04:44.486 04:00:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:44.486 
04:00:46 -- scripts/common.sh@393 -- # pt= 00:04:44.486 04:00:46 -- scripts/common.sh@394 -- # return 1 00:04:44.486 04:00:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:44.486 04:00:46 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:44.486 04:00:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:44.486 04:00:46 -- setup/common.sh@80 -- # echo 6343335936 00:04:44.486 04:00:46 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:44.486 04:00:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:44.486 04:00:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:44.486 04:00:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:44.486 04:00:46 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:44.486 04:00:46 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:44.486 04:00:46 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:44.486 04:00:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:44.486 04:00:46 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:44.486 04:00:46 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:44.486 04:00:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:44.486 No valid GPT data, bailing 00:04:44.486 04:00:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:44.486 04:00:46 -- scripts/common.sh@393 -- # pt= 00:04:44.486 04:00:46 -- scripts/common.sh@394 -- # return 1 00:04:44.486 04:00:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:44.486 04:00:46 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:44.486 04:00:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:44.486 04:00:46 -- setup/common.sh@80 -- # echo 5368709120 00:04:44.486 04:00:46 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:44.486 04:00:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:44.486 04:00:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:44.486 04:00:46 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:44.486 04:00:46 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:44.486 04:00:46 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:44.486 04:00:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.486 04:00:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.486 04:00:46 -- common/autotest_common.sh@10 -- # set +x 00:04:44.486 ************************************ 00:04:44.486 START TEST nvme_mount 00:04:44.486 ************************************ 00:04:44.486 04:00:46 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:44.486 04:00:46 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:44.486 04:00:46 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:44.486 04:00:46 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.486 04:00:46 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:44.486 04:00:46 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:44.486 04:00:46 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:44.486 04:00:46 -- setup/common.sh@40 -- # local part_no=1 00:04:44.486 04:00:46 -- setup/common.sh@41 -- # local size=1073741824 00:04:44.486 04:00:46 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:44.486 04:00:46 -- setup/common.sh@44 -- # parts=() 00:04:44.486 04:00:46 -- 
setup/common.sh@44 -- # local parts 00:04:44.486 04:00:46 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:44.486 04:00:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.486 04:00:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:44.486 04:00:46 -- setup/common.sh@46 -- # (( part++ )) 00:04:44.486 04:00:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.486 04:00:46 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:44.486 04:00:46 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:44.486 04:00:46 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:45.430 Creating new GPT entries in memory. 00:04:45.430 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:45.430 other utilities. 00:04:45.430 04:00:47 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:45.430 04:00:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.430 04:00:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.430 04:00:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.430 04:00:47 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:46.819 Creating new GPT entries in memory. 00:04:46.819 The operation has completed successfully. 00:04:46.819 04:00:48 -- setup/common.sh@57 -- # (( part++ )) 00:04:46.819 04:00:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.819 04:00:48 -- setup/common.sh@62 -- # wait 66120 00:04:46.819 04:00:48 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.819 04:00:48 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:46.819 04:00:48 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.819 04:00:48 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:46.819 04:00:48 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:46.820 04:00:48 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.820 04:00:48 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.820 04:00:48 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:46.820 04:00:48 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:46.820 04:00:48 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.820 04:00:48 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.820 04:00:48 -- setup/devices.sh@53 -- # local found=0 00:04:46.820 04:00:48 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.820 04:00:48 -- setup/devices.sh@56 -- # : 00:04:46.820 04:00:48 -- setup/devices.sh@59 -- # local pci status 00:04:46.820 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.820 04:00:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:46.820 04:00:48 -- setup/devices.sh@47 -- # setup output config 00:04:46.820 04:00:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.820 04:00:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.820 04:00:48 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:46.820 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.081 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:47.081 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.342 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:47.342 04:00:48 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:47.342 04:00:48 -- setup/devices.sh@63 -- # found=1 00:04:47.342 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.342 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:47.342 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.342 04:00:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:47.342 04:00:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.605 04:00:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:47.605 04:00:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.605 04:00:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.605 04:00:49 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:47.605 04:00:49 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.605 04:00:49 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.605 04:00:49 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:47.605 04:00:49 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:47.605 04:00:49 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.605 04:00:49 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.605 04:00:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:47.605 04:00:49 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:47.605 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.605 04:00:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:47.605 04:00:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:47.866 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:47.866 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:47.866 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.866 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:47.866 04:00:49 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:47.866 04:00:49 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:47.866 04:00:49 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:47.866 04:00:49 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:47.866 04:00:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:47.866 04:00:49 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.128 04:00:49 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.128 04:00:49 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:48.128 04:00:49 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:48.128 04:00:49 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.128 04:00:49 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.128 04:00:49 -- setup/devices.sh@53 -- # local found=0 00:04:48.128 04:00:49 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.128 04:00:49 -- setup/devices.sh@56 -- # : 00:04:48.128 04:00:49 -- setup/devices.sh@59 -- # local pci status 00:04:48.128 04:00:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.128 04:00:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:48.128 04:00:49 -- setup/devices.sh@47 -- # setup output config 00:04:48.128 04:00:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.128 04:00:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.128 04:00:49 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:48.128 04:00:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.389 04:00:49 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:48.389 04:00:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.389 04:00:50 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:48.389 04:00:50 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:48.389 04:00:50 -- setup/devices.sh@63 -- # found=1 00:04:48.389 04:00:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.389 04:00:50 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:48.389 04:00:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.649 04:00:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:48.649 04:00:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.649 04:00:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:48.650 04:00:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.911 04:00:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.911 04:00:50 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:48.911 04:00:50 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.911 04:00:50 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.911 04:00:50 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.911 04:00:50 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.911 04:00:50 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:48.911 04:00:50 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:48.911 04:00:50 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:48.911 04:00:50 -- setup/devices.sh@50 -- # local mount_point= 00:04:48.911 04:00:50 -- setup/devices.sh@51 -- # local test_file= 00:04:48.911 04:00:50 -- setup/devices.sh@53 -- # local found=0 
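The verify helper that keeps reappearing above always follows the same pattern: pin a single controller with PCI_ALLOWED, re-run setup.sh config, and scan each per-device status line for the expected active mount, setting found=1 on a match. A minimal sketch of that loop, assuming the same BDF and mount pair as this run (the setup.sh path, the PCI_ALLOWED variable, the read field layout, and the "Active devices:" wording come from the trace; the rest is illustrative, not the script itself):

allowed=0000:00:08.0                         # controller pinned in this run
mounts='nvme1n1:nvme1n1p1'                   # device:mount pair verify() receives
found=0
while read -r pci _ _ status; do
    # only the allowed BDF's line matters; when the mount is active,
    # setup.sh reports "... so not binding PCI dev" for that device
    if [[ $pci == "$allowed" && $status == *"Active devices: "*"$mounts"* ]]; then
        found=1
    fi
done < <(PCI_ALLOWED=$allowed /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
(( found == 1 ))                             # non-zero exit fails the verify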
00:04:48.911 04:00:50 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:48.911 04:00:50 -- setup/devices.sh@59 -- # local pci status 00:04:48.911 04:00:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.911 04:00:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:48.911 04:00:50 -- setup/devices.sh@47 -- # setup output config 00:04:48.911 04:00:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.911 04:00:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.911 04:00:50 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:48.911 04:00:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.171 04:00:50 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:49.171 04:00:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.432 04:00:51 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:49.432 04:00:51 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:49.432 04:00:51 -- setup/devices.sh@63 -- # found=1 00:04:49.432 04:00:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.432 04:00:51 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:49.432 04:00:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.432 04:00:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:49.432 04:00:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.694 04:00:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:49.694 04:00:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.694 04:00:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.694 04:00:51 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:49.694 04:00:51 -- setup/devices.sh@68 -- # return 0 00:04:49.694 04:00:51 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:49.694 04:00:51 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.694 04:00:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:49.694 04:00:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:49.694 04:00:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:49.694 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.694 00:04:49.694 real 0m5.227s 00:04:49.694 user 0m1.011s 00:04:49.694 sys 0m1.453s 00:04:49.694 04:00:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.694 ************************************ 00:04:49.694 04:00:51 -- common/autotest_common.sh@10 -- # set +x 00:04:49.694 END TEST nvme_mount 00:04:49.694 ************************************ 00:04:49.694 04:00:51 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:49.694 04:00:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.694 04:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.694 04:00:51 -- common/autotest_common.sh@10 -- # set +x 00:04:49.694 ************************************ 00:04:49.694 START TEST dm_mount 00:04:49.694 ************************************ 00:04:49.694 04:00:51 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:49.694 04:00:51 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:49.694 04:00:51 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:49.694 04:00:51 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:49.694 04:00:51 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:04:49.694 04:00:51 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:49.694 04:00:51 -- setup/common.sh@40 -- # local part_no=2 00:04:49.694 04:00:51 -- setup/common.sh@41 -- # local size=1073741824 00:04:49.694 04:00:51 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.694 04:00:51 -- setup/common.sh@44 -- # parts=() 00:04:49.694 04:00:51 -- setup/common.sh@44 -- # local parts 00:04:49.694 04:00:51 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.694 04:00:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.694 04:00:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.694 04:00:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:49.694 04:00:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.694 04:00:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.694 04:00:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:49.694 04:00:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.694 04:00:51 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:49.694 04:00:51 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:49.694 04:00:51 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:51.080 Creating new GPT entries in memory. 00:04:51.080 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:51.080 other utilities. 00:04:51.080 04:00:52 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:51.080 04:00:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.080 04:00:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:51.080 04:00:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.080 04:00:52 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:52.023 Creating new GPT entries in memory. 00:04:52.023 The operation has completed successfully. 00:04:52.023 04:00:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:52.023 04:00:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.023 04:00:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.023 04:00:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.023 04:00:53 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:52.966 The operation has completed successfully. 
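nvme_mount reached this point through partition_drive with one partition; dm_mount above repeats it with two. The pattern is the same both times: zap the GPT, then carve consecutive 262144-sector partitions, holding flock on the disk so sgdisk cannot race anything else probing it, while sync_dev_uevents.sh waits for the kernel to announce each new node. A condensed sketch of the loop the trace just executed (disk name, sizes, and sgdisk arguments mirror this run; the uevent listener is elided):

disk=/dev/nvme1n1
part_no=2                        # two partitions for dm_mount, one for nvme_mount
size=$(( 1073741824 / 4096 ))    # 262144 sectors per partition, as in the trace
sgdisk "$disk" --zap-all         # prints "GPT data structures destroyed!"
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end   = part_start + size - 1 ))
    # first pass creates 1:2048:264191, second 2:264192:526335
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done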
00:04:52.966 04:00:54 -- setup/common.sh@57 -- # (( part++ )) 00:04:52.966 04:00:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.966 04:00:54 -- setup/common.sh@62 -- # wait 66748 00:04:52.966 04:00:54 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:52.966 04:00:54 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:52.966 04:00:54 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:52.966 04:00:54 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:52.966 04:00:54 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:52.966 04:00:54 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:52.966 04:00:54 -- setup/devices.sh@161 -- # break 00:04:52.966 04:00:54 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:52.966 04:00:54 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:52.966 04:00:54 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:52.966 04:00:54 -- setup/devices.sh@166 -- # dm=dm-0 00:04:52.966 04:00:54 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:52.966 04:00:54 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:52.966 04:00:54 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:52.966 04:00:54 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:52.966 04:00:54 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:52.966 04:00:54 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:52.966 04:00:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:52.966 04:00:54 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.227 04:00:54 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.227 04:00:54 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:53.227 04:00:54 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:53.227 04:00:54 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.227 04:00:54 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.227 04:00:54 -- setup/devices.sh@53 -- # local found=0 00:04:53.227 04:00:54 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.227 04:00:54 -- setup/devices.sh@56 -- # : 00:04:53.227 04:00:54 -- setup/devices.sh@59 -- # local pci status 00:04:53.227 04:00:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.227 04:00:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:53.227 04:00:54 -- setup/devices.sh@47 -- # setup output config 00:04:53.227 04:00:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.227 04:00:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.227 04:00:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.227 04:00:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.488 04:00:55 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.488 04:00:55 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.749 04:00:55 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.750 04:00:55 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:53.750 04:00:55 -- setup/devices.sh@63 -- # found=1 00:04:53.750 04:00:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.750 04:00:55 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.750 04:00:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.750 04:00:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.750 04:00:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.011 04:00:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.011 04:00:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.011 04:00:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.011 04:00:55 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:54.011 04:00:55 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.011 04:00:55 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:54.011 04:00:55 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:54.011 04:00:55 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.011 04:00:55 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:54.011 04:00:55 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:54.011 04:00:55 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:54.011 04:00:55 -- setup/devices.sh@50 -- # local mount_point= 00:04:54.011 04:00:55 -- setup/devices.sh@51 -- # local test_file= 00:04:54.011 04:00:55 -- setup/devices.sh@53 -- # local found=0 00:04:54.011 04:00:55 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.011 04:00:55 -- setup/devices.sh@59 -- # local pci status 00:04:54.011 04:00:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.011 04:00:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:54.011 04:00:55 -- setup/devices.sh@47 -- # setup output config 00:04:54.011 04:00:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.011 04:00:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.011 04:00:55 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.011 04:00:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.273 04:00:55 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.273 04:00:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.535 04:00:56 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.535 04:00:56 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:54.535 04:00:56 -- setup/devices.sh@63 -- # found=1 00:04:54.535 04:00:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.535 04:00:56 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.535 04:00:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.796 04:00:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.796 04:00:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.796 04:00:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.796 04:00:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.796 04:00:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.796 04:00:56 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:54.796 04:00:56 -- setup/devices.sh@68 -- # return 0 00:04:54.796 04:00:56 -- setup/devices.sh@187 -- # cleanup_dm 00:04:54.796 04:00:56 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.796 04:00:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:54.796 04:00:56 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:54.796 04:00:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:54.796 04:00:56 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:54.796 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.796 04:00:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:54.796 04:00:56 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:55.057 00:04:55.057 real 0m5.117s 00:04:55.057 user 0m0.686s 00:04:55.057 sys 0m1.024s 00:04:55.057 04:00:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.057 ************************************ 00:04:55.057 END TEST dm_mount 00:04:55.057 ************************************ 00:04:55.057 04:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.057 04:00:56 -- setup/devices.sh@1 -- # cleanup 00:04:55.057 04:00:56 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:55.057 04:00:56 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.057 04:00:56 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:55.057 04:00:56 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:55.057 04:00:56 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:55.057 04:00:56 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:55.319 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.319 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.319 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.319 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:55.319 04:00:56 -- setup/devices.sh@12 -- # cleanup_dm 00:04:55.319 04:00:56 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:55.319 04:00:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:55.319 04:00:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:55.319 04:00:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:55.319 04:00:56 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:55.319 04:00:56 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:55.319 00:04:55.319 real 0m12.634s 00:04:55.319 user 0m2.483s 00:04:55.319 sys 0m3.325s 00:04:55.319 ************************************ 00:04:55.319 END TEST devices 00:04:55.319 ************************************ 00:04:55.319 04:00:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.319 04:00:56 -- common/autotest_common.sh@10 -- # 
set +x 00:04:55.319 00:04:55.319 real 0m40.777s 00:04:55.319 user 0m7.833s 00:04:55.319 sys 0m11.211s 00:04:55.319 04:00:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.319 ************************************ 00:04:55.319 END TEST setup.sh 00:04:55.319 ************************************ 00:04:55.319 04:00:57 -- common/autotest_common.sh@10 -- # set +x 00:04:55.319 04:00:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.580 Hugepages 00:04:55.580 node hugesize free / total 00:04:55.580 node0 1048576kB 0 / 0 00:04:55.580 node0 2048kB 2048 / 2048 00:04:55.580 00:04:55.580 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.580 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:55.840 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:55.840 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:55.840 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:55.841 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:55.841 04:00:57 -- spdk/autotest.sh@128 -- # uname -s 00:04:55.841 04:00:57 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:55.841 04:00:57 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:55.841 04:00:57 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.227 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.227 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.227 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.227 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.227 04:00:58 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:58.170 04:00:59 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:58.170 04:00:59 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:58.170 04:00:59 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.170 04:00:59 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:58.170 04:00:59 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:58.171 04:00:59 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:58.171 04:00:59 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.171 04:00:59 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:58.171 04:00:59 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:58.171 04:00:59 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:58.171 04:00:59 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:58.171 04:00:59 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.744 Waiting for block devices as requested 00:04:58.744 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:59.006 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:59.006 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:59.006 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.299 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:05:04.299 04:01:05 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.299 04:01:05 -- 
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:04.299 04:01:05 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:04.299 04:01:05 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:04.299 04:01:05 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:04.299 04:01:05 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:04.299 04:01:05 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:04.299 04:01:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:05:04.299 04:01:05 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:05:04.299 04:01:05 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:05:04.299 04:01:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:04.299 04:01:05 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.299 04:01:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.299 04:01:05 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:04.299 04:01:05 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.299 04:01:05 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.299 04:01:05 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:05:04.299 04:01:05 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:04.299 04:01:05 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.299 04:01:05 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.299 04:01:05 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.299 04:01:05 -- common/autotest_common.sh@1552 -- # continue 00:05:04.299 04:01:05 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.299 04:01:05 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:04.299 04:01:05 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:04.299 04:01:05 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:04.300 04:01:05 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:04.300 04:01:05 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:04.300 04:01:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:05:04.300 04:01:05 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:05:04.300 04:01:05 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:04.300 04:01:05 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.300 04:01:05 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # cut 
-d: -f2 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.300 04:01:05 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1552 -- # continue 00:05:04.300 04:01:05 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.300 04:01:05 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:04.300 04:01:05 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:04.300 04:01:05 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:05:04.300 04:01:05 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:04.300 04:01:05 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:04.300 04:01:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:04.300 04:01:05 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:04.300 04:01:05 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:04.300 04:01:05 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.300 04:01:05 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.300 04:01:05 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1552 -- # continue 00:05:04.300 04:01:05 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.300 04:01:05 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:04.300 04:01:05 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:04.300 04:01:05 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:05:04.300 04:01:05 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:04.300 04:01:05 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:04.300 04:01:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:04.300 04:01:05 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:04.300 04:01:05 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.300 04:01:05 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:04.300 04:01:05 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.300 04:01:05 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:04.300 04:01:05 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.300 04:01:05 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.300 04:01:05 -- common/autotest_common.sh@1552 -- # continue 00:05:04.300 04:01:05 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:04.300 04:01:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.300 04:01:05 -- common/autotest_common.sh@10 -- # set +x 00:05:04.300 04:01:06 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:04.300 04:01:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.300 04:01:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.300 04:01:06 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.555 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.555 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.555 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.555 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.555 04:01:07 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:05.555 04:01:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.555 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:05.555 04:01:07 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:05.555 04:01:07 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:05.555 04:01:07 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.555 04:01:07 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:05.556 04:01:07 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:05.556 04:01:07 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:05.556 04:01:07 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:05.556 04:01:07 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:05.556 04:01:07 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.556 04:01:07 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.556 04:01:07 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:05.817 04:01:07 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:05.817 04:01:07 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.817 04:01:07 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:05.817 04:01:07 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.817 04:01:07 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.817 04:01:07 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.817 04:01:07 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:05.817 04:01:07 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.817 04:01:07 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
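Each controller in the pre_cleanup pass above is probed the same way: pull the OACS word out of nvme id-ctrl, test the namespace-management bit, then confirm no unallocated capacity remains (unvmcap 0) before moving on. A hedged sketch of that probe, assuming nvme-cli's usual "field : value" output as seen in the trace (the bit mask and loop body are illustrative; the real script may branch differently):

for ctrlr in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)      # ' 0x12a' here
    (( (oacs & 0x8) != 0 )) || continue   # bit 3 = namespace management support
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && continue        # nothing to reclaim, as in this run
    # a revert would reclaim the unallocated capacity here
done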
00:05:05.817 04:01:07 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.817 04:01:07 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:05.817 04:01:07 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.817 04:01:07 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.817 04:01:07 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.817 04:01:07 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:05.817 04:01:07 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.817 04:01:07 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.817 04:01:07 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:05.817 04:01:07 -- common/autotest_common.sh@1588 -- # return 0 00:05:05.817 04:01:07 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:05.817 04:01:07 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:05.817 04:01:07 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.817 04:01:07 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.817 04:01:07 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:05.817 04:01:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.817 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:05.817 04:01:07 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.817 04:01:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.817 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:05.817 ************************************ 00:05:05.817 START TEST env 00:05:05.817 ************************************ 00:05:05.817 04:01:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.817 * Looking for test storage... 00:05:05.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.817 04:01:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.817 04:01:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.817 04:01:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.817 04:01:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.817 04:01:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.817 04:01:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.817 04:01:07 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.817 04:01:07 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.817 04:01:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.817 04:01:07 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.817 04:01:07 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.817 04:01:07 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.817 04:01:07 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.817 04:01:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.817 04:01:07 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.817 04:01:07 -- scripts/common.sh@344 -- # : 1 00:05:05.817 04:01:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.817 04:01:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.817 04:01:07 -- scripts/common.sh@364 -- # decimal 1 00:05:05.817 04:01:07 -- scripts/common.sh@352 -- # local d=1 00:05:05.817 04:01:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.817 04:01:07 -- scripts/common.sh@354 -- # echo 1 00:05:05.817 04:01:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.817 04:01:07 -- scripts/common.sh@365 -- # decimal 2 00:05:05.817 04:01:07 -- scripts/common.sh@352 -- # local d=2 00:05:05.817 04:01:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.817 04:01:07 -- scripts/common.sh@354 -- # echo 2 00:05:05.817 04:01:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.817 04:01:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.817 04:01:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.817 04:01:07 -- scripts/common.sh@367 -- # return 0 00:05:05.817 04:01:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.817 --rc genhtml_branch_coverage=1 00:05:05.817 --rc genhtml_function_coverage=1 00:05:05.817 --rc genhtml_legend=1 00:05:05.817 --rc geninfo_all_blocks=1 00:05:05.817 --rc geninfo_unexecuted_blocks=1 00:05:05.817 00:05:05.817 ' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.817 --rc genhtml_branch_coverage=1 00:05:05.817 --rc genhtml_function_coverage=1 00:05:05.817 --rc genhtml_legend=1 00:05:05.817 --rc geninfo_all_blocks=1 00:05:05.817 --rc geninfo_unexecuted_blocks=1 00:05:05.817 00:05:05.817 ' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.817 --rc genhtml_branch_coverage=1 00:05:05.817 --rc genhtml_function_coverage=1 00:05:05.817 --rc genhtml_legend=1 00:05:05.817 --rc geninfo_all_blocks=1 00:05:05.817 --rc geninfo_unexecuted_blocks=1 00:05:05.817 00:05:05.817 ' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.817 --rc genhtml_branch_coverage=1 00:05:05.817 --rc genhtml_function_coverage=1 00:05:05.817 --rc genhtml_legend=1 00:05:05.817 --rc geninfo_all_blocks=1 00:05:05.817 --rc geninfo_unexecuted_blocks=1 00:05:05.817 00:05:05.817 ' 00:05:05.817 04:01:07 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.817 04:01:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.817 04:01:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.817 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.078 ************************************ 00:05:06.078 START TEST env_memory 00:05:06.078 ************************************ 00:05:06.078 04:01:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:06.078 00:05:06.078 00:05:06.078 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.078 http://cunit.sourceforge.net/ 00:05:06.078 00:05:06.078 00:05:06.078 Suite: memory 00:05:06.078 Test: alloc and free memory map ...[2024-11-26 04:01:07.648251] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.078 passed 00:05:06.078 Test: mem 
map translation ...[2024-11-26 04:01:07.690104] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.078 [2024-11-26 04:01:07.690184] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.078 [2024-11-26 04:01:07.690249] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.078 [2024-11-26 04:01:07.690266] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.078 passed 00:05:06.078 Test: mem map registration ...[2024-11-26 04:01:07.759738] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:06.078 [2024-11-26 04:01:07.759816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:06.078 passed 00:05:06.339 Test: mem map adjacent registrations ...passed 00:05:06.339 00:05:06.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.339 suites 1 1 n/a 0 0 00:05:06.339 tests 4 4 4 0 0 00:05:06.339 asserts 152 152 152 0 n/a 00:05:06.339 00:05:06.339 Elapsed time = 0.242 seconds 00:05:06.339 00:05:06.339 real 0m0.280s 00:05:06.339 user 0m0.253s 00:05:06.339 sys 0m0.020s 00:05:06.339 04:01:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.339 ************************************ 00:05:06.339 END TEST env_memory 00:05:06.339 ************************************ 00:05:06.339 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.339 04:01:07 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.339 04:01:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.339 04:01:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.339 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.339 ************************************ 00:05:06.339 START TEST env_vtophys 00:05:06.339 ************************************ 00:05:06.339 04:01:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.339 EAL: lib.eal log level changed from notice to debug 00:05:06.339 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 1 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 2 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 3 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 4 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 5 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 6 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 7 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 8 as core 0 on socket 0 00:05:06.339 EAL: Detected lcore 9 as core 0 on socket 0 00:05:06.339 EAL: Maximum logical cores by configuration: 128 00:05:06.339 EAL: Detected CPU lcores: 10 00:05:06.339 EAL: Detected NUMA nodes: 1 00:05:06.339 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:06.339 EAL: Detected shared linkage of DPDK 00:05:06.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:06.339 EAL: open shared lib 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:06.339 EAL: Registered [vdev] bus. 00:05:06.339 EAL: bus.vdev log level changed from disabled to notice 00:05:06.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:06.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:06.339 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:06.339 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:06.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:06.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:06.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:06.339 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:06.339 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.339 EAL: No shared files mode enabled, IPC is disabled 00:05:06.339 EAL: Selected IOVA mode 'PA' 00:05:06.339 EAL: Probing VFIO support... 00:05:06.339 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.339 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:06.339 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.339 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.339 EAL: Setting up physically contiguous memory... 00:05:06.339 EAL: Setting maximum number of open files to 524288 00:05:06.339 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.339 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.339 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.339 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.339 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.339 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.339 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.339 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.339 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.339 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.339 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.339 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.339 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.339 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.339 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.339 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.339 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.339 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.340 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.340 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.340 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.340 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.340 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.340 EAL: Hugepages will be freed exactly as allocated. 
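
Note: the memseg-list reservations above only carve out virtual address space; the physical backing comes from the 2 MiB hugepage pool ("page size 0x800kB") that scripts/setup.sh provisioned earlier in this log. A minimal sketch for inspecting that pool on the test host, using only standard Linux paths (the helper itself is illustrative and not part of the test suite):

    # Show the hugepage counters the EAL allocator draws from.
    grep -i '^HugePages' /proc/meminfo
    # Per-size pool as exposed by sysfs; 2048 kB matches the 0x800kB page size above.
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
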
00:05:06.340 EAL: No shared files mode enabled, IPC is disabled 00:05:06.340 EAL: No shared files mode enabled, IPC is disabled 00:05:06.600 EAL: TSC frequency is ~2600000 KHz 00:05:06.600 EAL: Main lcore 0 is ready (tid=7f76d3920a40;cpuset=[0]) 00:05:06.600 EAL: Trying to obtain current memory policy. 00:05:06.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.600 EAL: Restoring previous memory policy: 0 00:05:06.600 EAL: request: mp_malloc_sync 00:05:06.600 EAL: No shared files mode enabled, IPC is disabled 00:05:06.600 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.600 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.600 EAL: No shared files mode enabled, IPC is disabled 00:05:06.600 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.600 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.600 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:06.600 00:05:06.600 00:05:06.600 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.600 http://cunit.sourceforge.net/ 00:05:06.600 00:05:06.600 00:05:06.600 Suite: components_suite 00:05:06.861 Test: vtophys_malloc_test ...passed 00:05:06.861 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.861 EAL: Restoring previous memory policy: 4 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.861 EAL: Trying to obtain current memory policy. 00:05:06.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.861 EAL: Restoring previous memory policy: 4 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.861 EAL: Trying to obtain current memory policy. 00:05:06.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.861 EAL: Restoring previous memory policy: 4 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.861 EAL: Trying to obtain current memory policy. 
00:05:06.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.861 EAL: Restoring previous memory policy: 4 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.861 EAL: Trying to obtain current memory policy. 00:05:06.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.861 EAL: Restoring previous memory policy: 4 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.861 EAL: Trying to obtain current memory policy. 00:05:06.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.861 EAL: Restoring previous memory policy: 4 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.861 EAL: request: mp_malloc_sync 00:05:06.861 EAL: No shared files mode enabled, IPC is disabled 00:05:06.861 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.861 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.123 EAL: request: mp_malloc_sync 00:05:07.123 EAL: No shared files mode enabled, IPC is disabled 00:05:07.123 EAL: Heap on socket 0 was shrunk by 66MB 00:05:07.123 EAL: Trying to obtain current memory policy. 00:05:07.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.123 EAL: Restoring previous memory policy: 4 00:05:07.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.123 EAL: request: mp_malloc_sync 00:05:07.123 EAL: No shared files mode enabled, IPC is disabled 00:05:07.123 EAL: Heap on socket 0 was expanded by 130MB 00:05:07.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.123 EAL: request: mp_malloc_sync 00:05:07.123 EAL: No shared files mode enabled, IPC is disabled 00:05:07.123 EAL: Heap on socket 0 was shrunk by 130MB 00:05:07.123 EAL: Trying to obtain current memory policy. 00:05:07.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.123 EAL: Restoring previous memory policy: 4 00:05:07.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.123 EAL: request: mp_malloc_sync 00:05:07.123 EAL: No shared files mode enabled, IPC is disabled 00:05:07.123 EAL: Heap on socket 0 was expanded by 258MB 00:05:07.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.384 EAL: request: mp_malloc_sync 00:05:07.384 EAL: No shared files mode enabled, IPC is disabled 00:05:07.384 EAL: Heap on socket 0 was shrunk by 258MB 00:05:07.384 EAL: Trying to obtain current memory policy. 
00:05:07.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.384 EAL: Restoring previous memory policy: 4 00:05:07.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.384 EAL: request: mp_malloc_sync 00:05:07.384 EAL: No shared files mode enabled, IPC is disabled 00:05:07.384 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.650 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.650 EAL: request: mp_malloc_sync 00:05:07.650 EAL: No shared files mode enabled, IPC is disabled 00:05:07.650 EAL: Heap on socket 0 was shrunk by 514MB 00:05:07.650 EAL: Trying to obtain current memory policy. 00:05:07.650 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.221 EAL: Restoring previous memory policy: 4 00:05:08.221 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.221 EAL: request: mp_malloc_sync 00:05:08.221 EAL: No shared files mode enabled, IPC is disabled 00:05:08.221 EAL: Heap on socket 0 was expanded by 1026MB 00:05:08.221 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.479 passed 00:05:08.479 00:05:08.479 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.479 suites 1 1 n/a 0 0 00:05:08.479 tests 2 2 2 0 0 00:05:08.479 asserts 5281 5281 5281 0 n/a 00:05:08.479 00:05:08.479 Elapsed time = 1.972 seconds 00:05:08.479 EAL: request: mp_malloc_sync 00:05:08.479 EAL: No shared files mode enabled, IPC is disabled 00:05:08.479 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.479 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.479 EAL: request: mp_malloc_sync 00:05:08.479 EAL: No shared files mode enabled, IPC is disabled 00:05:08.479 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.479 EAL: No shared files mode enabled, IPC is disabled 00:05:08.479 EAL: No shared files mode enabled, IPC is disabled 00:05:08.479 EAL: No shared files mode enabled, IPC is disabled 00:05:08.479 00:05:08.479 real 0m2.222s 00:05:08.479 user 0m0.989s 00:05:08.479 sys 0m1.082s 00:05:08.479 04:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.479 ************************************ 00:05:08.479 END TEST env_vtophys 00:05:08.480 ************************************ 00:05:08.480 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.480 04:01:10 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.480 04:01:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.480 04:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.480 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.480 ************************************ 00:05:08.480 START TEST env_pci 00:05:08.480 ************************************ 00:05:08.480 04:01:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.480 00:05:08.480 00:05:08.480 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.480 http://cunit.sourceforge.net/ 00:05:08.480 00:05:08.480 00:05:08.480 Suite: pci 00:05:08.480 Test: pci_hook ...[2024-11-26 04:01:10.240939] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68436 has claimed it 00:05:08.738 passed 00:05:08.738 00:05:08.738 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.738 suites 1 1 n/a 0 0 00:05:08.738 tests 1 1 1 0 0 00:05:08.738 asserts 25 25 25 0 n/a 00:05:08.738 00:05:08.738 Elapsed time = 0.006 seconds 00:05:08.738 EAL: Cannot find device (10000:00:01.0) 00:05:08.738 EAL: Failed to attach device 
on primary process 00:05:08.738 ************************************ 00:05:08.738 END TEST env_pci 00:05:08.738 ************************************ 00:05:08.738 00:05:08.738 real 0m0.058s 00:05:08.738 user 0m0.032s 00:05:08.738 sys 0m0.025s 00:05:08.738 04:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.738 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.738 04:01:10 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.738 04:01:10 -- env/env.sh@15 -- # uname 00:05:08.738 04:01:10 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.738 04:01:10 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.738 04:01:10 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.738 04:01:10 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:08.738 04:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.738 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.738 ************************************ 00:05:08.738 START TEST env_dpdk_post_init 00:05:08.738 ************************************ 00:05:08.738 04:01:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.738 EAL: Detected CPU lcores: 10 00:05:08.738 EAL: Detected NUMA nodes: 1 00:05:08.738 EAL: Detected shared linkage of DPDK 00:05:08.738 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.738 EAL: Selected IOVA mode 'PA' 00:05:08.738 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.997 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:08.997 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:08.997 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:08.997 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:08.997 Starting DPDK initialization... 00:05:08.997 Starting SPDK post initialization... 00:05:08.997 SPDK NVMe probe 00:05:08.997 Attaching to 0000:00:06.0 00:05:08.997 Attaching to 0000:00:07.0 00:05:08.997 Attaching to 0000:00:08.0 00:05:08.997 Attaching to 0000:00:09.0 00:05:08.997 Attached to 0000:00:09.0 00:05:08.997 Attached to 0000:00:06.0 00:05:08.997 Attached to 0000:00:07.0 00:05:08.997 Attached to 0000:00:08.0 00:05:08.997 Cleaning up... 
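
The four controllers attach because setup.sh rebound them from the kernel nvme driver to uio_pci_generic (see the "nvme -> uio_pci_generic" lines earlier in this log). A quick illustrative check of that binding via sysfs (the loop is a hypothetical helper, but the paths are standard Linux):

    # Print the kernel driver currently bound to each NVMe BDF probed above.
    for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
        printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
    done
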
00:05:08.997 00:05:08.997 real 0m0.214s 00:05:08.997 user 0m0.049s 00:05:08.997 sys 0m0.067s 00:05:08.997 04:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.997 ************************************ 00:05:08.997 END TEST env_dpdk_post_init 00:05:08.997 ************************************ 00:05:08.997 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.997 04:01:10 -- env/env.sh@26 -- # uname 00:05:08.997 04:01:10 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:08.997 04:01:10 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.997 04:01:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.997 04:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.997 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.997 ************************************ 00:05:08.997 START TEST env_mem_callbacks 00:05:08.997 ************************************ 00:05:08.997 04:01:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.997 EAL: Detected CPU lcores: 10 00:05:08.997 EAL: Detected NUMA nodes: 1 00:05:08.997 EAL: Detected shared linkage of DPDK 00:05:08.997 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.997 EAL: Selected IOVA mode 'PA' 00:05:08.997 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.997 00:05:08.997 00:05:08.997 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.997 http://cunit.sourceforge.net/ 00:05:08.997 00:05:08.997 00:05:08.997 Suite: memory 00:05:08.997 Test: test ... 00:05:08.997 register 0x200000200000 2097152 00:05:08.997 malloc 3145728 00:05:08.997 register 0x200000400000 4194304 00:05:08.997 buf 0x200000500000 len 3145728 PASSED 00:05:08.997 malloc 64 00:05:08.997 buf 0x2000004fff40 len 64 PASSED 00:05:08.997 malloc 4194304 00:05:08.997 register 0x200000800000 6291456 00:05:09.256 buf 0x200000a00000 len 4194304 PASSED 00:05:09.256 free 0x200000500000 3145728 00:05:09.256 free 0x2000004fff40 64 00:05:09.256 unregister 0x200000400000 4194304 PASSED 00:05:09.256 free 0x200000a00000 4194304 00:05:09.256 unregister 0x200000800000 6291456 PASSED 00:05:09.256 malloc 8388608 00:05:09.256 register 0x200000400000 10485760 00:05:09.256 buf 0x200000600000 len 8388608 PASSED 00:05:09.256 free 0x200000600000 8388608 00:05:09.256 unregister 0x200000400000 10485760 PASSED 00:05:09.256 passed 00:05:09.256 00:05:09.256 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.256 suites 1 1 n/a 0 0 00:05:09.256 tests 1 1 1 0 0 00:05:09.256 asserts 15 15 15 0 n/a 00:05:09.256 00:05:09.256 Elapsed time = 0.007 seconds 00:05:09.256 00:05:09.256 real 0m0.167s 00:05:09.256 user 0m0.022s 00:05:09.256 sys 0m0.044s 00:05:09.256 04:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.256 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.256 ************************************ 00:05:09.256 END TEST env_mem_callbacks 00:05:09.256 ************************************ 00:05:09.256 ************************************ 00:05:09.256 END TEST env 00:05:09.256 ************************************ 00:05:09.256 00:05:09.256 real 0m3.399s 00:05:09.256 user 0m1.508s 00:05:09.256 sys 0m1.457s 00:05:09.256 04:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.256 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.256 04:01:10 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
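
For readers following the banners: run_test is the autotest harness wrapper that produces the START TEST / END TEST markers and the real/user/sys timing seen throughout this log. A simplified sketch of its observable behavior only (the actual helper in test/common/autotest_common.sh also manages xtrace and exit-code handling, so this is an approximation, not the harness code):

    run_test() {
        # Print the opening banner, time the wrapped command, print the closing banner.
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
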
00:05:09.256 04:01:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.256 04:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.256 04:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:09.256 ************************************ 00:05:09.256 START TEST rpc 00:05:09.256 ************************************ 00:05:09.256 04:01:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:09.256 * Looking for test storage... 00:05:09.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.256 04:01:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:09.256 04:01:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:09.256 04:01:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:09.514 04:01:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:09.514 04:01:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:09.514 04:01:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:09.514 04:01:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:09.514 04:01:11 -- scripts/common.sh@335 -- # IFS=.-: 00:05:09.514 04:01:11 -- scripts/common.sh@335 -- # read -ra ver1 00:05:09.514 04:01:11 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.514 04:01:11 -- scripts/common.sh@336 -- # read -ra ver2 00:05:09.514 04:01:11 -- scripts/common.sh@337 -- # local 'op=<' 00:05:09.514 04:01:11 -- scripts/common.sh@339 -- # ver1_l=2 00:05:09.514 04:01:11 -- scripts/common.sh@340 -- # ver2_l=1 00:05:09.514 04:01:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:09.514 04:01:11 -- scripts/common.sh@343 -- # case "$op" in 00:05:09.514 04:01:11 -- scripts/common.sh@344 -- # : 1 00:05:09.514 04:01:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:09.514 04:01:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.514 04:01:11 -- scripts/common.sh@364 -- # decimal 1 00:05:09.514 04:01:11 -- scripts/common.sh@352 -- # local d=1 00:05:09.514 04:01:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.514 04:01:11 -- scripts/common.sh@354 -- # echo 1 00:05:09.514 04:01:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:09.514 04:01:11 -- scripts/common.sh@365 -- # decimal 2 00:05:09.514 04:01:11 -- scripts/common.sh@352 -- # local d=2 00:05:09.514 04:01:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.514 04:01:11 -- scripts/common.sh@354 -- # echo 2 00:05:09.514 04:01:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:09.514 04:01:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:09.514 04:01:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:09.514 04:01:11 -- scripts/common.sh@367 -- # return 0 00:05:09.514 04:01:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.514 04:01:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:09.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.514 --rc genhtml_branch_coverage=1 00:05:09.514 --rc genhtml_function_coverage=1 00:05:09.514 --rc genhtml_legend=1 00:05:09.514 --rc geninfo_all_blocks=1 00:05:09.514 --rc geninfo_unexecuted_blocks=1 00:05:09.514 00:05:09.514 ' 00:05:09.514 04:01:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:09.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.514 --rc genhtml_branch_coverage=1 00:05:09.514 --rc genhtml_function_coverage=1 00:05:09.514 --rc genhtml_legend=1 00:05:09.514 --rc geninfo_all_blocks=1 00:05:09.514 --rc geninfo_unexecuted_blocks=1 00:05:09.514 00:05:09.514 ' 00:05:09.514 04:01:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:09.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.514 --rc genhtml_branch_coverage=1 00:05:09.514 --rc genhtml_function_coverage=1 00:05:09.514 --rc genhtml_legend=1 00:05:09.514 --rc geninfo_all_blocks=1 00:05:09.514 --rc geninfo_unexecuted_blocks=1 00:05:09.514 00:05:09.514 ' 00:05:09.514 04:01:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:09.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.514 --rc genhtml_branch_coverage=1 00:05:09.514 --rc genhtml_function_coverage=1 00:05:09.514 --rc genhtml_legend=1 00:05:09.514 --rc geninfo_all_blocks=1 00:05:09.514 --rc geninfo_unexecuted_blocks=1 00:05:09.514 00:05:09.514 ' 00:05:09.514 04:01:11 -- rpc/rpc.sh@65 -- # spdk_pid=68557 00:05:09.514 04:01:11 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.514 04:01:11 -- rpc/rpc.sh@67 -- # waitforlisten 68557 00:05:09.514 04:01:11 -- common/autotest_common.sh@829 -- # '[' -z 68557 ']' 00:05:09.514 04:01:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.514 04:01:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.514 04:01:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
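
The "Waiting for process..." message comes from waitforlisten, which blocks until spdk_tgt answers on /var/tmp/spdk.sock. A hedged sketch of the same idea using scripts/rpc.py (rpc_get_methods is a real SPDK RPC; the polling loop itself is illustrative rather than the harness implementation):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # Poll the RPC socket until the target responds, then proceed with the tests.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
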
00:05:09.514 04:01:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.514 04:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:09.514 04:01:11 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:09.514 [2024-11-26 04:01:11.111909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:09.514 [2024-11-26 04:01:11.112043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68557 ] 00:05:09.514 [2024-11-26 04:01:11.257905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.772 [2024-11-26 04:01:11.298829] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.772 [2024-11-26 04:01:11.299023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:09.773 [2024-11-26 04:01:11.299039] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68557' to capture a snapshot of events at runtime. 00:05:09.773 [2024-11-26 04:01:11.299048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68557 for offline analysis/debug. 00:05:09.773 [2024-11-26 04:01:11.299083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.342 04:01:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.342 04:01:11 -- common/autotest_common.sh@862 -- # return 0 00:05:10.342 04:01:11 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.342 04:01:11 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.342 04:01:11 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:10.342 04:01:11 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:10.342 04:01:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.342 04:01:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.342 04:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.342 ************************************ 00:05:10.342 START TEST rpc_integrity 00:05:10.342 ************************************ 00:05:10.342 04:01:11 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:10.342 04:01:11 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.342 04:01:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.342 04:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.342 04:01:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.342 04:01:11 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.342 04:01:11 -- rpc/rpc.sh@13 -- # jq length 00:05:10.342 04:01:11 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.342 04:01:11 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.342 04:01:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.342 04:01:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.342 04:01:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.342 04:01:11 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:10.342 04:01:12 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.342 04:01:12 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.342 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.342 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.342 04:01:12 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.342 { 00:05:10.342 "name": "Malloc0", 00:05:10.342 "aliases": [ 00:05:10.342 "7e0b1ea6-fd81-4bb1-ba66-dc2fcb62a0c4" 00:05:10.342 ], 00:05:10.342 "product_name": "Malloc disk", 00:05:10.342 "block_size": 512, 00:05:10.342 "num_blocks": 16384, 00:05:10.342 "uuid": "7e0b1ea6-fd81-4bb1-ba66-dc2fcb62a0c4", 00:05:10.342 "assigned_rate_limits": { 00:05:10.342 "rw_ios_per_sec": 0, 00:05:10.342 "rw_mbytes_per_sec": 0, 00:05:10.342 "r_mbytes_per_sec": 0, 00:05:10.342 "w_mbytes_per_sec": 0 00:05:10.342 }, 00:05:10.342 "claimed": false, 00:05:10.342 "zoned": false, 00:05:10.342 "supported_io_types": { 00:05:10.342 "read": true, 00:05:10.342 "write": true, 00:05:10.342 "unmap": true, 00:05:10.342 "write_zeroes": true, 00:05:10.342 "flush": true, 00:05:10.342 "reset": true, 00:05:10.342 "compare": false, 00:05:10.342 "compare_and_write": false, 00:05:10.342 "abort": true, 00:05:10.342 "nvme_admin": false, 00:05:10.342 "nvme_io": false 00:05:10.342 }, 00:05:10.342 "memory_domains": [ 00:05:10.342 { 00:05:10.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.342 "dma_device_type": 2 00:05:10.342 } 00:05:10.342 ], 00:05:10.342 "driver_specific": {} 00:05:10.342 } 00:05:10.342 ]' 00:05:10.342 04:01:12 -- rpc/rpc.sh@17 -- # jq length 00:05:10.342 04:01:12 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.342 04:01:12 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:10.342 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.342 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.342 [2024-11-26 04:01:12.053965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:10.342 [2024-11-26 04:01:12.054061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.342 [2024-11-26 04:01:12.054094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:10.342 [2024-11-26 04:01:12.054111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.342 [2024-11-26 04:01:12.056811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.342 [2024-11-26 04:01:12.056868] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.342 Passthru0 00:05:10.342 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.343 04:01:12 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.343 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.343 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.343 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.343 04:01:12 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.343 { 00:05:10.343 "name": "Malloc0", 00:05:10.343 "aliases": [ 00:05:10.343 "7e0b1ea6-fd81-4bb1-ba66-dc2fcb62a0c4" 00:05:10.343 ], 00:05:10.343 "product_name": "Malloc disk", 00:05:10.343 "block_size": 512, 00:05:10.343 "num_blocks": 16384, 00:05:10.343 "uuid": "7e0b1ea6-fd81-4bb1-ba66-dc2fcb62a0c4", 00:05:10.343 "assigned_rate_limits": { 00:05:10.343 "rw_ios_per_sec": 0, 00:05:10.343 "rw_mbytes_per_sec": 0, 00:05:10.343 "r_mbytes_per_sec": 0, 00:05:10.343 "w_mbytes_per_sec": 0 00:05:10.343 }, 00:05:10.343 "claimed": true, 00:05:10.343 "claim_type": "exclusive_write", 00:05:10.343 "zoned": 
false, 00:05:10.343 "supported_io_types": { 00:05:10.343 "read": true, 00:05:10.343 "write": true, 00:05:10.343 "unmap": true, 00:05:10.343 "write_zeroes": true, 00:05:10.343 "flush": true, 00:05:10.343 "reset": true, 00:05:10.343 "compare": false, 00:05:10.343 "compare_and_write": false, 00:05:10.343 "abort": true, 00:05:10.343 "nvme_admin": false, 00:05:10.343 "nvme_io": false 00:05:10.343 }, 00:05:10.343 "memory_domains": [ 00:05:10.343 { 00:05:10.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.343 "dma_device_type": 2 00:05:10.343 } 00:05:10.343 ], 00:05:10.343 "driver_specific": {} 00:05:10.343 }, 00:05:10.343 { 00:05:10.343 "name": "Passthru0", 00:05:10.343 "aliases": [ 00:05:10.343 "9c660f51-1bfe-58ae-9f16-5f2519469a5c" 00:05:10.343 ], 00:05:10.343 "product_name": "passthru", 00:05:10.343 "block_size": 512, 00:05:10.343 "num_blocks": 16384, 00:05:10.343 "uuid": "9c660f51-1bfe-58ae-9f16-5f2519469a5c", 00:05:10.343 "assigned_rate_limits": { 00:05:10.343 "rw_ios_per_sec": 0, 00:05:10.343 "rw_mbytes_per_sec": 0, 00:05:10.343 "r_mbytes_per_sec": 0, 00:05:10.343 "w_mbytes_per_sec": 0 00:05:10.343 }, 00:05:10.343 "claimed": false, 00:05:10.343 "zoned": false, 00:05:10.343 "supported_io_types": { 00:05:10.343 "read": true, 00:05:10.343 "write": true, 00:05:10.343 "unmap": true, 00:05:10.343 "write_zeroes": true, 00:05:10.343 "flush": true, 00:05:10.343 "reset": true, 00:05:10.343 "compare": false, 00:05:10.343 "compare_and_write": false, 00:05:10.343 "abort": true, 00:05:10.343 "nvme_admin": false, 00:05:10.343 "nvme_io": false 00:05:10.343 }, 00:05:10.343 "memory_domains": [ 00:05:10.343 { 00:05:10.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.343 "dma_device_type": 2 00:05:10.343 } 00:05:10.343 ], 00:05:10.343 "driver_specific": { 00:05:10.343 "passthru": { 00:05:10.343 "name": "Passthru0", 00:05:10.343 "base_bdev_name": "Malloc0" 00:05:10.343 } 00:05:10.343 } 00:05:10.343 } 00:05:10.343 ]' 00:05:10.343 04:01:12 -- rpc/rpc.sh@21 -- # jq length 00:05:10.604 04:01:12 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.604 04:01:12 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.604 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.604 04:01:12 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:10.604 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.604 04:01:12 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.604 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.604 04:01:12 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.604 04:01:12 -- rpc/rpc.sh@26 -- # jq length 00:05:10.604 04:01:12 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.604 00:05:10.604 real 0m0.234s 00:05:10.604 user 0m0.134s 00:05:10.604 sys 0m0.034s 00:05:10.604 04:01:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.604 ************************************ 00:05:10.604 END TEST rpc_integrity 00:05:10.604 ************************************ 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 04:01:12 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:10.604 
04:01:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.604 04:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 ************************************ 00:05:10.604 START TEST rpc_plugins 00:05:10.604 ************************************ 00:05:10.604 04:01:12 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:10.604 04:01:12 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:10.604 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.604 04:01:12 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:10.604 04:01:12 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:10.604 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.604 04:01:12 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:10.604 { 00:05:10.604 "name": "Malloc1", 00:05:10.604 "aliases": [ 00:05:10.604 "03a20637-5a6c-4c10-936d-fc3f768f8a95" 00:05:10.604 ], 00:05:10.604 "product_name": "Malloc disk", 00:05:10.604 "block_size": 4096, 00:05:10.604 "num_blocks": 256, 00:05:10.604 "uuid": "03a20637-5a6c-4c10-936d-fc3f768f8a95", 00:05:10.604 "assigned_rate_limits": { 00:05:10.604 "rw_ios_per_sec": 0, 00:05:10.604 "rw_mbytes_per_sec": 0, 00:05:10.604 "r_mbytes_per_sec": 0, 00:05:10.604 "w_mbytes_per_sec": 0 00:05:10.604 }, 00:05:10.604 "claimed": false, 00:05:10.604 "zoned": false, 00:05:10.604 "supported_io_types": { 00:05:10.604 "read": true, 00:05:10.604 "write": true, 00:05:10.604 "unmap": true, 00:05:10.604 "write_zeroes": true, 00:05:10.604 "flush": true, 00:05:10.604 "reset": true, 00:05:10.604 "compare": false, 00:05:10.604 "compare_and_write": false, 00:05:10.604 "abort": true, 00:05:10.604 "nvme_admin": false, 00:05:10.604 "nvme_io": false 00:05:10.604 }, 00:05:10.604 "memory_domains": [ 00:05:10.604 { 00:05:10.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.604 "dma_device_type": 2 00:05:10.604 } 00:05:10.604 ], 00:05:10.604 "driver_specific": {} 00:05:10.604 } 00:05:10.604 ]' 00:05:10.604 04:01:12 -- rpc/rpc.sh@32 -- # jq length 00:05:10.604 04:01:12 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.604 04:01:12 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.604 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.604 04:01:12 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.604 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.604 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.604 04:01:12 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.604 04:01:12 -- rpc/rpc.sh@36 -- # jq length 00:05:10.866 04:01:12 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.866 00:05:10.866 real 0m0.124s 00:05:10.866 user 0m0.062s 00:05:10.866 sys 0m0.019s 00:05:10.866 04:01:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.866 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.866 ************************************ 00:05:10.866 END TEST rpc_plugins 00:05:10.866 ************************************ 00:05:10.866 
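
The rpc_integrity and rpc_plugins runs above can be replayed by hand against the same target; every RPC name below appears verbatim in this log, so only the shell wrapper around them is illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # prints the new bdev name, e.g. Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on top
    $rpc bdev_get_bdevs | jq length                    # expect 2 while both exist
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                    # back to 0
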
04:01:12 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.866 04:01:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.866 04:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.866 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.866 ************************************ 00:05:10.866 START TEST rpc_trace_cmd_test 00:05:10.866 ************************************ 00:05:10.867 04:01:12 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:10.867 04:01:12 -- rpc/rpc.sh@40 -- # local info 00:05:10.867 04:01:12 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.867 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.867 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.867 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.867 04:01:12 -- rpc/rpc.sh@42 -- # info='{ 00:05:10.867 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68557", 00:05:10.867 "tpoint_group_mask": "0x8", 00:05:10.867 "iscsi_conn": { 00:05:10.867 "mask": "0x2", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "scsi": { 00:05:10.867 "mask": "0x4", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "bdev": { 00:05:10.867 "mask": "0x8", 00:05:10.867 "tpoint_mask": "0xffffffffffffffff" 00:05:10.867 }, 00:05:10.867 "nvmf_rdma": { 00:05:10.867 "mask": "0x10", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "nvmf_tcp": { 00:05:10.867 "mask": "0x20", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "ftl": { 00:05:10.867 "mask": "0x40", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "blobfs": { 00:05:10.867 "mask": "0x80", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "dsa": { 00:05:10.867 "mask": "0x200", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "thread": { 00:05:10.867 "mask": "0x400", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "nvme_pcie": { 00:05:10.867 "mask": "0x800", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "iaa": { 00:05:10.867 "mask": "0x1000", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "nvme_tcp": { 00:05:10.867 "mask": "0x2000", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 }, 00:05:10.867 "bdev_nvme": { 00:05:10.867 "mask": "0x4000", 00:05:10.867 "tpoint_mask": "0x0" 00:05:10.867 } 00:05:10.867 }' 00:05:10.867 04:01:12 -- rpc/rpc.sh@43 -- # jq length 00:05:10.867 04:01:12 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:10.867 04:01:12 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.867 04:01:12 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.867 04:01:12 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.867 04:01:12 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.867 04:01:12 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.867 04:01:12 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.867 04:01:12 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.867 04:01:12 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.867 00:05:10.867 real 0m0.172s 00:05:10.867 user 0m0.136s 00:05:10.867 sys 0m0.025s 00:05:10.867 04:01:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.867 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.867 ************************************ 00:05:10.867 END TEST rpc_trace_cmd_test 00:05:10.867 ************************************ 00:05:11.129 04:01:12 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:11.129 04:01:12 -- rpc/rpc.sh@80 -- # 
rpc=rpc_cmd 00:05:11.129 04:01:12 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:11.129 04:01:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.129 04:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.129 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.129 ************************************ 00:05:11.129 START TEST rpc_daemon_integrity 00:05:11.129 ************************************ 00:05:11.129 04:01:12 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:11.129 04:01:12 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.129 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.129 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.129 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.129 04:01:12 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.129 04:01:12 -- rpc/rpc.sh@13 -- # jq length 00:05:11.129 04:01:12 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.129 04:01:12 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.129 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.129 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.129 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.129 04:01:12 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:11.129 04:01:12 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.129 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.129 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.129 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.129 04:01:12 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.129 { 00:05:11.129 "name": "Malloc2", 00:05:11.129 "aliases": [ 00:05:11.129 "b76a82bc-1469-4ce0-9e01-aaa36b5259c1" 00:05:11.129 ], 00:05:11.129 "product_name": "Malloc disk", 00:05:11.129 "block_size": 512, 00:05:11.129 "num_blocks": 16384, 00:05:11.129 "uuid": "b76a82bc-1469-4ce0-9e01-aaa36b5259c1", 00:05:11.129 "assigned_rate_limits": { 00:05:11.129 "rw_ios_per_sec": 0, 00:05:11.129 "rw_mbytes_per_sec": 0, 00:05:11.129 "r_mbytes_per_sec": 0, 00:05:11.129 "w_mbytes_per_sec": 0 00:05:11.129 }, 00:05:11.129 "claimed": false, 00:05:11.129 "zoned": false, 00:05:11.129 "supported_io_types": { 00:05:11.129 "read": true, 00:05:11.129 "write": true, 00:05:11.129 "unmap": true, 00:05:11.129 "write_zeroes": true, 00:05:11.129 "flush": true, 00:05:11.129 "reset": true, 00:05:11.129 "compare": false, 00:05:11.129 "compare_and_write": false, 00:05:11.129 "abort": true, 00:05:11.129 "nvme_admin": false, 00:05:11.129 "nvme_io": false 00:05:11.129 }, 00:05:11.129 "memory_domains": [ 00:05:11.129 { 00:05:11.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.129 "dma_device_type": 2 00:05:11.129 } 00:05:11.129 ], 00:05:11.129 "driver_specific": {} 00:05:11.129 } 00:05:11.129 ]' 00:05:11.129 04:01:12 -- rpc/rpc.sh@17 -- # jq length 00:05:11.129 04:01:12 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.130 04:01:12 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:11.130 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.130 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.130 [2024-11-26 04:01:12.780425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:11.130 [2024-11-26 04:01:12.780528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.130 [2024-11-26 04:01:12.780553] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009680 00:05:11.130 [2024-11-26 04:01:12.780566] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.130 [2024-11-26 04:01:12.783201] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.130 [2024-11-26 04:01:12.783259] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.130 Passthru0 00:05:11.130 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.130 04:01:12 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.130 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.130 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.130 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.130 04:01:12 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.130 { 00:05:11.130 "name": "Malloc2", 00:05:11.130 "aliases": [ 00:05:11.130 "b76a82bc-1469-4ce0-9e01-aaa36b5259c1" 00:05:11.130 ], 00:05:11.130 "product_name": "Malloc disk", 00:05:11.130 "block_size": 512, 00:05:11.130 "num_blocks": 16384, 00:05:11.130 "uuid": "b76a82bc-1469-4ce0-9e01-aaa36b5259c1", 00:05:11.130 "assigned_rate_limits": { 00:05:11.130 "rw_ios_per_sec": 0, 00:05:11.130 "rw_mbytes_per_sec": 0, 00:05:11.130 "r_mbytes_per_sec": 0, 00:05:11.130 "w_mbytes_per_sec": 0 00:05:11.130 }, 00:05:11.130 "claimed": true, 00:05:11.130 "claim_type": "exclusive_write", 00:05:11.130 "zoned": false, 00:05:11.130 "supported_io_types": { 00:05:11.130 "read": true, 00:05:11.130 "write": true, 00:05:11.130 "unmap": true, 00:05:11.130 "write_zeroes": true, 00:05:11.130 "flush": true, 00:05:11.130 "reset": true, 00:05:11.130 "compare": false, 00:05:11.130 "compare_and_write": false, 00:05:11.130 "abort": true, 00:05:11.130 "nvme_admin": false, 00:05:11.130 "nvme_io": false 00:05:11.130 }, 00:05:11.130 "memory_domains": [ 00:05:11.130 { 00:05:11.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.130 "dma_device_type": 2 00:05:11.130 } 00:05:11.130 ], 00:05:11.130 "driver_specific": {} 00:05:11.130 }, 00:05:11.130 { 00:05:11.130 "name": "Passthru0", 00:05:11.130 "aliases": [ 00:05:11.130 "54535e4a-cf84-5a6f-94ce-31e20b47db92" 00:05:11.130 ], 00:05:11.130 "product_name": "passthru", 00:05:11.130 "block_size": 512, 00:05:11.130 "num_blocks": 16384, 00:05:11.130 "uuid": "54535e4a-cf84-5a6f-94ce-31e20b47db92", 00:05:11.130 "assigned_rate_limits": { 00:05:11.130 "rw_ios_per_sec": 0, 00:05:11.130 "rw_mbytes_per_sec": 0, 00:05:11.130 "r_mbytes_per_sec": 0, 00:05:11.130 "w_mbytes_per_sec": 0 00:05:11.130 }, 00:05:11.130 "claimed": false, 00:05:11.130 "zoned": false, 00:05:11.130 "supported_io_types": { 00:05:11.130 "read": true, 00:05:11.130 "write": true, 00:05:11.130 "unmap": true, 00:05:11.130 "write_zeroes": true, 00:05:11.130 "flush": true, 00:05:11.130 "reset": true, 00:05:11.130 "compare": false, 00:05:11.130 "compare_and_write": false, 00:05:11.130 "abort": true, 00:05:11.130 "nvme_admin": false, 00:05:11.130 "nvme_io": false 00:05:11.130 }, 00:05:11.130 "memory_domains": [ 00:05:11.130 { 00:05:11.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.130 "dma_device_type": 2 00:05:11.130 } 00:05:11.130 ], 00:05:11.130 "driver_specific": { 00:05:11.130 "passthru": { 00:05:11.130 "name": "Passthru0", 00:05:11.130 "base_bdev_name": "Malloc2" 00:05:11.130 } 00:05:11.130 } 00:05:11.130 } 00:05:11.130 ]' 00:05:11.130 04:01:12 -- rpc/rpc.sh@21 -- # jq length 00:05:11.130 04:01:12 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.130 04:01:12 -- rpc/rpc.sh@23 -- # rpc_cmd 
bdev_passthru_delete Passthru0 00:05:11.130 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.130 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.130 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.130 04:01:12 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:11.130 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.130 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.130 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.130 04:01:12 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.130 04:01:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.130 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.130 04:01:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.130 04:01:12 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.130 04:01:12 -- rpc/rpc.sh@26 -- # jq length 00:05:11.391 04:01:12 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.391 00:05:11.391 real 0m0.234s 00:05:11.391 user 0m0.131s 00:05:11.391 sys 0m0.034s 00:05:11.391 04:01:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.391 ************************************ 00:05:11.391 END TEST rpc_daemon_integrity 00:05:11.391 ************************************ 00:05:11.391 04:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.391 04:01:12 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.391 04:01:12 -- rpc/rpc.sh@84 -- # killprocess 68557 00:05:11.391 04:01:12 -- common/autotest_common.sh@936 -- # '[' -z 68557 ']' 00:05:11.391 04:01:12 -- common/autotest_common.sh@940 -- # kill -0 68557 00:05:11.391 04:01:12 -- common/autotest_common.sh@941 -- # uname 00:05:11.391 04:01:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.391 04:01:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68557 00:05:11.391 04:01:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.391 04:01:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.391 killing process with pid 68557 00:05:11.391 04:01:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68557' 00:05:11.391 04:01:12 -- common/autotest_common.sh@955 -- # kill 68557 00:05:11.391 04:01:12 -- common/autotest_common.sh@960 -- # wait 68557 00:05:11.961 00:05:11.962 real 0m2.553s 00:05:11.962 user 0m2.961s 00:05:11.962 sys 0m0.629s 00:05:11.962 04:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.962 04:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:11.962 ************************************ 00:05:11.962 END TEST rpc 00:05:11.962 ************************************ 00:05:11.962 04:01:13 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:11.962 04:01:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.962 04:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.962 04:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:11.962 ************************************ 00:05:11.962 START TEST rpc_client 00:05:11.962 ************************************ 00:05:11.962 04:01:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:11.962 * Looking for test storage... 
00:05:11.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:11.962 04:01:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.962 04:01:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.962 04:01:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.962 04:01:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.962 04:01:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.962 04:01:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.962 04:01:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.962 04:01:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.962 04:01:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.962 04:01:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.962 04:01:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.962 04:01:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.962 04:01:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.962 04:01:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.962 04:01:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.962 04:01:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.962 04:01:13 -- scripts/common.sh@344 -- # : 1 00:05:11.962 04:01:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.962 04:01:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.962 04:01:13 -- scripts/common.sh@364 -- # decimal 1 00:05:11.962 04:01:13 -- scripts/common.sh@352 -- # local d=1 00:05:11.962 04:01:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.962 04:01:13 -- scripts/common.sh@354 -- # echo 1 00:05:11.962 04:01:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.962 04:01:13 -- scripts/common.sh@365 -- # decimal 2 00:05:11.962 04:01:13 -- scripts/common.sh@352 -- # local d=2 00:05:11.962 04:01:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.962 04:01:13 -- scripts/common.sh@354 -- # echo 2 00:05:11.962 04:01:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.962 04:01:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.962 04:01:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.962 04:01:13 -- scripts/common.sh@367 -- # return 0 00:05:11.962 04:01:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.962 04:01:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.962 --rc genhtml_branch_coverage=1 00:05:11.962 --rc genhtml_function_coverage=1 00:05:11.962 --rc genhtml_legend=1 00:05:11.962 --rc geninfo_all_blocks=1 00:05:11.962 --rc geninfo_unexecuted_blocks=1 00:05:11.962 00:05:11.962 ' 00:05:11.962 04:01:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.962 --rc genhtml_branch_coverage=1 00:05:11.962 --rc genhtml_function_coverage=1 00:05:11.962 --rc genhtml_legend=1 00:05:11.962 --rc geninfo_all_blocks=1 00:05:11.962 --rc geninfo_unexecuted_blocks=1 00:05:11.962 00:05:11.962 ' 00:05:11.962 04:01:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.962 --rc genhtml_branch_coverage=1 00:05:11.962 --rc genhtml_function_coverage=1 00:05:11.962 --rc genhtml_legend=1 00:05:11.962 --rc geninfo_all_blocks=1 00:05:11.962 --rc geninfo_unexecuted_blocks=1 00:05:11.962 00:05:11.962 ' 00:05:11.962 
04:01:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.962 --rc genhtml_branch_coverage=1 00:05:11.962 --rc genhtml_function_coverage=1 00:05:11.962 --rc genhtml_legend=1 00:05:11.962 --rc geninfo_all_blocks=1 00:05:11.962 --rc geninfo_unexecuted_blocks=1 00:05:11.962 00:05:11.962 ' 00:05:11.962 04:01:13 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:11.962 OK 00:05:11.962 04:01:13 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:11.962 00:05:11.962 real 0m0.182s 00:05:11.962 user 0m0.102s 00:05:11.962 sys 0m0.087s 00:05:11.962 04:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.962 04:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:11.962 ************************************ 00:05:11.962 END TEST rpc_client 00:05:11.962 ************************************ 00:05:11.962 04:01:13 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:11.962 04:01:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.962 04:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.962 04:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:11.962 ************************************ 00:05:11.962 START TEST json_config 00:05:11.962 ************************************ 00:05:11.962 04:01:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:12.224 04:01:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:12.224 04:01:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:12.224 04:01:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:12.224 04:01:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:12.224 04:01:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:12.224 04:01:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:12.224 04:01:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:12.224 04:01:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:12.224 04:01:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:12.224 04:01:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.224 04:01:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:12.224 04:01:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:12.224 04:01:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:12.224 04:01:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:12.224 04:01:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:12.224 04:01:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:12.224 04:01:13 -- scripts/common.sh@344 -- # : 1 00:05:12.224 04:01:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:12.224 04:01:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.224 04:01:13 -- scripts/common.sh@364 -- # decimal 1 00:05:12.224 04:01:13 -- scripts/common.sh@352 -- # local d=1 00:05:12.224 04:01:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.224 04:01:13 -- scripts/common.sh@354 -- # echo 1 00:05:12.224 04:01:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.224 04:01:13 -- scripts/common.sh@365 -- # decimal 2 00:05:12.224 04:01:13 -- scripts/common.sh@352 -- # local d=2 00:05:12.224 04:01:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.224 04:01:13 -- scripts/common.sh@354 -- # echo 2 00:05:12.224 04:01:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.224 04:01:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.224 04:01:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.224 04:01:13 -- scripts/common.sh@367 -- # return 0 00:05:12.224 04:01:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.224 04:01:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.224 --rc genhtml_branch_coverage=1 00:05:12.224 --rc genhtml_function_coverage=1 00:05:12.224 --rc genhtml_legend=1 00:05:12.224 --rc geninfo_all_blocks=1 00:05:12.224 --rc geninfo_unexecuted_blocks=1 00:05:12.224 00:05:12.224 ' 00:05:12.224 04:01:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.224 --rc genhtml_branch_coverage=1 00:05:12.224 --rc genhtml_function_coverage=1 00:05:12.225 --rc genhtml_legend=1 00:05:12.225 --rc geninfo_all_blocks=1 00:05:12.225 --rc geninfo_unexecuted_blocks=1 00:05:12.225 00:05:12.225 ' 00:05:12.225 04:01:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.225 --rc genhtml_branch_coverage=1 00:05:12.225 --rc genhtml_function_coverage=1 00:05:12.225 --rc genhtml_legend=1 00:05:12.225 --rc geninfo_all_blocks=1 00:05:12.225 --rc geninfo_unexecuted_blocks=1 00:05:12.225 00:05:12.225 ' 00:05:12.225 04:01:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.225 --rc genhtml_branch_coverage=1 00:05:12.225 --rc genhtml_function_coverage=1 00:05:12.225 --rc genhtml_legend=1 00:05:12.225 --rc geninfo_all_blocks=1 00:05:12.225 --rc geninfo_unexecuted_blocks=1 00:05:12.225 00:05:12.225 ' 00:05:12.225 04:01:13 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.225 04:01:13 -- nvmf/common.sh@7 -- # uname -s 00:05:12.225 04:01:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.225 04:01:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.225 04:01:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.225 04:01:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.225 04:01:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.225 04:01:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.225 04:01:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.225 04:01:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.225 04:01:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.225 04:01:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.225 04:01:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ce961f53-3d34-4579-a607-64c3d960c355 
00:05:12.225 04:01:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=ce961f53-3d34-4579-a607-64c3d960c355 00:05:12.225 04:01:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.225 04:01:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.225 04:01:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.225 04:01:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.225 04:01:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.225 04:01:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.225 04:01:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.225 04:01:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.225 04:01:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.225 04:01:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.225 04:01:13 -- paths/export.sh@5 -- # export PATH 00:05:12.225 04:01:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.225 04:01:13 -- nvmf/common.sh@46 -- # : 0 00:05:12.225 04:01:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:12.225 04:01:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:12.225 04:01:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:12.225 04:01:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.225 04:01:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.225 04:01:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:12.225 04:01:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:12.225 04:01:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:12.225 04:01:13 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:12.225 04:01:13 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:12.225 04:01:13 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:12.225 04:01:13 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:12.225 04:01:13 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:12.225 WARNING: No tests are enabled so not running JSON configuration tests 00:05:12.225 04:01:13 -- json_config/json_config.sh@27 -- # exit 0 00:05:12.225 00:05:12.225 real 0m0.140s 00:05:12.225 user 0m0.091s 00:05:12.225 sys 0m0.052s 00:05:12.225 04:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.225 04:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.225 ************************************ 00:05:12.225 END TEST json_config 00:05:12.225 ************************************ 00:05:12.225 04:01:13 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.225 04:01:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.225 04:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.225 04:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.225 ************************************ 00:05:12.225 START TEST json_config_extra_key 00:05:12.225 ************************************ 00:05:12.225 04:01:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.225 04:01:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:12.225 04:01:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:12.225 04:01:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:12.487 04:01:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:12.487 04:01:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:12.487 04:01:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:12.487 04:01:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:12.487 04:01:14 -- scripts/common.sh@335 -- # IFS=.-: 00:05:12.487 04:01:14 -- scripts/common.sh@335 -- # read -ra ver1 00:05:12.487 04:01:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.487 04:01:14 -- scripts/common.sh@336 -- # read -ra ver2 00:05:12.487 04:01:14 -- scripts/common.sh@337 -- # local 'op=<' 00:05:12.487 04:01:14 -- scripts/common.sh@339 -- # ver1_l=2 00:05:12.487 04:01:14 -- scripts/common.sh@340 -- # ver2_l=1 00:05:12.487 04:01:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:12.487 04:01:14 -- scripts/common.sh@343 -- # case "$op" in 00:05:12.487 04:01:14 -- scripts/common.sh@344 -- # : 1 00:05:12.487 04:01:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:12.487 04:01:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.487 04:01:14 -- scripts/common.sh@364 -- # decimal 1 00:05:12.487 04:01:14 -- scripts/common.sh@352 -- # local d=1 00:05:12.487 04:01:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.487 04:01:14 -- scripts/common.sh@354 -- # echo 1 00:05:12.487 04:01:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.487 04:01:14 -- scripts/common.sh@365 -- # decimal 2 00:05:12.487 04:01:14 -- scripts/common.sh@352 -- # local d=2 00:05:12.487 04:01:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.487 04:01:14 -- scripts/common.sh@354 -- # echo 2 00:05:12.487 04:01:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.487 04:01:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.487 04:01:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.487 04:01:14 -- scripts/common.sh@367 -- # return 0 00:05:12.487 04:01:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.487 04:01:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.487 --rc genhtml_branch_coverage=1 00:05:12.487 --rc genhtml_function_coverage=1 00:05:12.487 --rc genhtml_legend=1 00:05:12.487 --rc geninfo_all_blocks=1 00:05:12.487 --rc geninfo_unexecuted_blocks=1 00:05:12.487 00:05:12.487 ' 00:05:12.487 04:01:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.487 --rc genhtml_branch_coverage=1 00:05:12.487 --rc genhtml_function_coverage=1 00:05:12.487 --rc genhtml_legend=1 00:05:12.487 --rc geninfo_all_blocks=1 00:05:12.487 --rc geninfo_unexecuted_blocks=1 00:05:12.487 00:05:12.487 ' 00:05:12.487 04:01:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.487 --rc genhtml_branch_coverage=1 00:05:12.487 --rc genhtml_function_coverage=1 00:05:12.487 --rc genhtml_legend=1 00:05:12.487 --rc geninfo_all_blocks=1 00:05:12.487 --rc geninfo_unexecuted_blocks=1 00:05:12.487 00:05:12.487 ' 00:05:12.487 04:01:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.487 --rc genhtml_branch_coverage=1 00:05:12.487 --rc genhtml_function_coverage=1 00:05:12.487 --rc genhtml_legend=1 00:05:12.487 --rc geninfo_all_blocks=1 00:05:12.487 --rc geninfo_unexecuted_blocks=1 00:05:12.487 00:05:12.487 ' 00:05:12.487 04:01:14 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.487 04:01:14 -- nvmf/common.sh@7 -- # uname -s 00:05:12.487 04:01:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.487 04:01:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.487 04:01:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.487 04:01:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.487 04:01:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.487 04:01:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.488 04:01:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.488 04:01:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.488 04:01:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.488 04:01:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.488 04:01:14 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ce961f53-3d34-4579-a607-64c3d960c355 00:05:12.488 04:01:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=ce961f53-3d34-4579-a607-64c3d960c355 00:05:12.488 04:01:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.488 04:01:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.488 04:01:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.488 04:01:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.488 04:01:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.488 04:01:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.488 04:01:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.488 04:01:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.488 04:01:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.488 04:01:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.488 04:01:14 -- paths/export.sh@5 -- # export PATH 00:05:12.488 04:01:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.488 04:01:14 -- nvmf/common.sh@46 -- # : 0 00:05:12.488 04:01:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:12.488 04:01:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:12.488 04:01:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:12.488 04:01:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.488 04:01:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.488 04:01:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:12.488 04:01:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:12.488 04:01:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:12.488 INFO: launching applications... 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68851 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:12.488 Waiting for target to run... 00:05:12.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68851 /var/tmp/spdk_tgt.sock 00:05:12.488 04:01:14 -- common/autotest_common.sh@829 -- # '[' -z 68851 ']' 00:05:12.488 04:01:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.488 04:01:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.488 04:01:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.488 04:01:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.488 04:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.488 04:01:14 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.488 [2024-11-26 04:01:14.136420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:12.488 [2024-11-26 04:01:14.136588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68851 ] 00:05:13.056 [2024-11-26 04:01:14.570271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.056 [2024-11-26 04:01:14.588872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.056 [2024-11-26 04:01:14.589050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.317 00:05:13.317 INFO: shutting down applications... 
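The launch that waitforlisten just confirmed can be reproduced by hand. A minimal sketch, assuming the binary and config paths shown in the trace above, with -s taken as the hugepage memory size in MB; the polling loop is a stand-in for the waitforlisten helper, not its exact implementation:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock

    # one core (-m 0x1), 1024 MB of memory (-s), private RPC socket (-r)
    $bin -m 0x1 -s 1024 -r "$sock" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!

    # waitforlisten equivalent: poll the socket until the target answers RPCs
    until "$rpc" -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done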
00:05:13.317 04:01:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.317 04:01:15 -- common/autotest_common.sh@862 -- # return 0 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68851 ]] 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68851 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68851 00:05:13.317 04:01:15 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:13.888 04:01:15 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:13.888 04:01:15 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:13.888 04:01:15 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68851 00:05:13.888 04:01:15 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:13.888 04:01:15 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:13.888 04:01:15 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:13.888 SPDK target shutdown done 00:05:13.888 Success 00:05:13.888 04:01:15 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:13.888 04:01:15 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:13.888 ************************************ 00:05:13.888 END TEST json_config_extra_key 00:05:13.888 ************************************ 00:05:13.888 00:05:13.888 real 0m1.638s 00:05:13.888 user 0m1.264s 00:05:13.888 sys 0m0.493s 00:05:13.888 04:01:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.888 04:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:13.888 04:01:15 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.888 04:01:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.888 04:01:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.888 04:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:13.888 ************************************ 00:05:13.888 START TEST alias_rpc 00:05:13.888 ************************************ 00:05:13.888 04:01:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.888 * Looking for test storage... 
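The shutdown loop traced above is worth spelling out: json_config_test_shutdown_app sends SIGINT and then polls the PID rather than blocking. A minimal sketch of the same logic, assuming $pid from the launch sketch earlier:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only tests whether the PID still exists
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'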
00:05:14.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:14.148 04:01:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.148 04:01:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.148 04:01:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.148 04:01:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.148 04:01:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.148 04:01:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.148 04:01:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.148 04:01:15 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.148 04:01:15 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.148 04:01:15 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.148 04:01:15 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.148 04:01:15 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.148 04:01:15 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.148 04:01:15 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.148 04:01:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.148 04:01:15 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.148 04:01:15 -- scripts/common.sh@344 -- # : 1 00:05:14.148 04:01:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.148 04:01:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.149 04:01:15 -- scripts/common.sh@364 -- # decimal 1 00:05:14.149 04:01:15 -- scripts/common.sh@352 -- # local d=1 00:05:14.149 04:01:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.149 04:01:15 -- scripts/common.sh@354 -- # echo 1 00:05:14.149 04:01:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.149 04:01:15 -- scripts/common.sh@365 -- # decimal 2 00:05:14.149 04:01:15 -- scripts/common.sh@352 -- # local d=2 00:05:14.149 04:01:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.149 04:01:15 -- scripts/common.sh@354 -- # echo 2 00:05:14.149 04:01:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.149 04:01:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.149 04:01:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.149 04:01:15 -- scripts/common.sh@367 -- # return 0 00:05:14.149 04:01:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.149 04:01:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.149 --rc genhtml_branch_coverage=1 00:05:14.149 --rc genhtml_function_coverage=1 00:05:14.149 --rc genhtml_legend=1 00:05:14.149 --rc geninfo_all_blocks=1 00:05:14.149 --rc geninfo_unexecuted_blocks=1 00:05:14.149 00:05:14.149 ' 00:05:14.149 04:01:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.149 --rc genhtml_branch_coverage=1 00:05:14.149 --rc genhtml_function_coverage=1 00:05:14.149 --rc genhtml_legend=1 00:05:14.149 --rc geninfo_all_blocks=1 00:05:14.149 --rc geninfo_unexecuted_blocks=1 00:05:14.149 00:05:14.149 ' 00:05:14.149 04:01:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.149 --rc genhtml_branch_coverage=1 00:05:14.149 --rc genhtml_function_coverage=1 00:05:14.149 --rc genhtml_legend=1 00:05:14.149 --rc geninfo_all_blocks=1 00:05:14.149 --rc geninfo_unexecuted_blocks=1 00:05:14.149 00:05:14.149 ' 
00:05:14.149 04:01:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.149 --rc genhtml_branch_coverage=1 00:05:14.149 --rc genhtml_function_coverage=1 00:05:14.149 --rc genhtml_legend=1 00:05:14.149 --rc geninfo_all_blocks=1 00:05:14.149 --rc geninfo_unexecuted_blocks=1 00:05:14.149 00:05:14.149 ' 00:05:14.149 04:01:15 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:14.149 04:01:15 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68929 00:05:14.149 04:01:15 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68929 00:05:14.149 04:01:15 -- common/autotest_common.sh@829 -- # '[' -z 68929 ']' 00:05:14.149 04:01:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.149 04:01:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.149 04:01:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.149 04:01:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.149 04:01:15 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.149 04:01:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.149 [2024-11-26 04:01:15.799620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:14.149 [2024-11-26 04:01:15.800236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68929 ] 00:05:14.410 [2024-11-26 04:01:15.950337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.410 [2024-11-26 04:01:15.982401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:14.410 [2024-11-26 04:01:15.982770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.002 04:01:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.002 04:01:16 -- common/autotest_common.sh@862 -- # return 0 00:05:15.002 04:01:16 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:15.260 04:01:16 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68929 00:05:15.260 04:01:16 -- common/autotest_common.sh@936 -- # '[' -z 68929 ']' 00:05:15.260 04:01:16 -- common/autotest_common.sh@940 -- # kill -0 68929 00:05:15.260 04:01:16 -- common/autotest_common.sh@941 -- # uname 00:05:15.260 04:01:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.260 04:01:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68929 00:05:15.260 killing process with pid 68929 00:05:15.260 04:01:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.260 04:01:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.260 04:01:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68929' 00:05:15.260 04:01:16 -- common/autotest_common.sh@955 -- # kill 68929 00:05:15.260 04:01:16 -- common/autotest_common.sh@960 -- # wait 68929 00:05:15.519 ************************************ 00:05:15.519 END TEST alias_rpc 00:05:15.519 ************************************ 00:05:15.519 00:05:15.519 real 0m1.512s 00:05:15.519 user 0m1.600s 00:05:15.519 sys 0m0.387s 
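The alias_rpc check above hinges on one call, load_config -i. A minimal sketch, assuming -i is rpc.py's --include-aliases switch (letting the config reference deprecated, aliased method names) and that load_config reads JSON from stdin; both are readings of the trace, not confirmed by it:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/spdk_config.json     # dump the live configuration
    $rpc load_config -i < /tmp/spdk_config.json  # replay it, aliases allowed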
00:05:15.519 04:01:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.519 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.519 04:01:17 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:15.519 04:01:17 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.519 04:01:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.519 04:01:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.519 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.519 ************************************ 00:05:15.519 START TEST spdkcli_tcp 00:05:15.519 ************************************ 00:05:15.519 04:01:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.519 * Looking for test storage... 00:05:15.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:15.519 04:01:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:15.519 04:01:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:15.519 04:01:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:15.519 04:01:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:15.519 04:01:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:15.519 04:01:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:15.519 04:01:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:15.519 04:01:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:15.519 04:01:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:15.519 04:01:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.519 04:01:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:15.519 04:01:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:15.519 04:01:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:15.519 04:01:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:15.519 04:01:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:15.519 04:01:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:15.519 04:01:17 -- scripts/common.sh@344 -- # : 1 00:05:15.519 04:01:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:15.519 04:01:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.519 04:01:17 -- scripts/common.sh@364 -- # decimal 1 00:05:15.519 04:01:17 -- scripts/common.sh@352 -- # local d=1 00:05:15.519 04:01:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.519 04:01:17 -- scripts/common.sh@354 -- # echo 1 00:05:15.519 04:01:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:15.778 04:01:17 -- scripts/common.sh@365 -- # decimal 2 00:05:15.778 04:01:17 -- scripts/common.sh@352 -- # local d=2 00:05:15.778 04:01:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.778 04:01:17 -- scripts/common.sh@354 -- # echo 2 00:05:15.778 04:01:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:15.778 04:01:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:15.778 04:01:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:15.778 04:01:17 -- scripts/common.sh@367 -- # return 0 00:05:15.778 04:01:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.778 04:01:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:15.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.778 --rc genhtml_branch_coverage=1 00:05:15.778 --rc genhtml_function_coverage=1 00:05:15.778 --rc genhtml_legend=1 00:05:15.778 --rc geninfo_all_blocks=1 00:05:15.778 --rc geninfo_unexecuted_blocks=1 00:05:15.778 00:05:15.778 ' 00:05:15.778 04:01:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:15.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.778 --rc genhtml_branch_coverage=1 00:05:15.778 --rc genhtml_function_coverage=1 00:05:15.778 --rc genhtml_legend=1 00:05:15.778 --rc geninfo_all_blocks=1 00:05:15.778 --rc geninfo_unexecuted_blocks=1 00:05:15.778 00:05:15.778 ' 00:05:15.778 04:01:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:15.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.778 --rc genhtml_branch_coverage=1 00:05:15.778 --rc genhtml_function_coverage=1 00:05:15.778 --rc genhtml_legend=1 00:05:15.778 --rc geninfo_all_blocks=1 00:05:15.778 --rc geninfo_unexecuted_blocks=1 00:05:15.778 00:05:15.778 ' 00:05:15.778 04:01:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:15.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.778 --rc genhtml_branch_coverage=1 00:05:15.778 --rc genhtml_function_coverage=1 00:05:15.778 --rc genhtml_legend=1 00:05:15.778 --rc geninfo_all_blocks=1 00:05:15.778 --rc geninfo_unexecuted_blocks=1 00:05:15.778 00:05:15.778 ' 00:05:15.778 04:01:17 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:15.778 04:01:17 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:15.778 04:01:17 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:15.778 04:01:17 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.778 04:01:17 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.778 04:01:17 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.778 04:01:17 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.778 04:01:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.778 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.778 04:01:17 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69002 00:05:15.778 04:01:17 -- spdkcli/tcp.sh@27 -- # waitforlisten 69002 00:05:15.778 04:01:17 -- spdkcli/tcp.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.778 04:01:17 -- common/autotest_common.sh@829 -- # '[' -z 69002 ']' 00:05:15.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.778 04:01:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.778 04:01:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.778 04:01:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.778 04:01:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.778 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:05:15.778 [2024-11-26 04:01:17.370308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:15.778 [2024-11-26 04:01:17.370421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69002 ] 00:05:15.778 [2024-11-26 04:01:17.522208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.037 [2024-11-26 04:01:17.554498] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.037 [2024-11-26 04:01:17.554961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.037 [2024-11-26 04:01:17.554980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.604 04:01:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.604 04:01:18 -- common/autotest_common.sh@862 -- # return 0 00:05:16.604 04:01:18 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:16.604 04:01:18 -- spdkcli/tcp.sh@31 -- # socat_pid=69019 00:05:16.604 04:01:18 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:16.604 [ 00:05:16.604 "bdev_malloc_delete", 00:05:16.604 "bdev_malloc_create", 00:05:16.604 "bdev_null_resize", 00:05:16.604 "bdev_null_delete", 00:05:16.604 "bdev_null_create", 00:05:16.604 "bdev_nvme_cuse_unregister", 00:05:16.604 "bdev_nvme_cuse_register", 00:05:16.604 "bdev_opal_new_user", 00:05:16.604 "bdev_opal_set_lock_state", 00:05:16.604 "bdev_opal_delete", 00:05:16.604 "bdev_opal_get_info", 00:05:16.604 "bdev_opal_create", 00:05:16.604 "bdev_nvme_opal_revert", 00:05:16.604 "bdev_nvme_opal_init", 00:05:16.604 "bdev_nvme_send_cmd", 00:05:16.604 "bdev_nvme_get_path_iostat", 00:05:16.604 "bdev_nvme_get_mdns_discovery_info", 00:05:16.604 "bdev_nvme_stop_mdns_discovery", 00:05:16.604 "bdev_nvme_start_mdns_discovery", 00:05:16.604 "bdev_nvme_set_multipath_policy", 00:05:16.604 "bdev_nvme_set_preferred_path", 00:05:16.604 "bdev_nvme_get_io_paths", 00:05:16.604 "bdev_nvme_remove_error_injection", 00:05:16.604 "bdev_nvme_add_error_injection", 00:05:16.604 "bdev_nvme_get_discovery_info", 00:05:16.604 "bdev_nvme_stop_discovery", 00:05:16.604 "bdev_nvme_start_discovery", 00:05:16.604 "bdev_nvme_get_controller_health_info", 00:05:16.604 "bdev_nvme_disable_controller", 00:05:16.604 "bdev_nvme_enable_controller", 00:05:16.604 "bdev_nvme_reset_controller", 00:05:16.604 "bdev_nvme_get_transport_statistics", 00:05:16.604 "bdev_nvme_apply_firmware", 00:05:16.604 "bdev_nvme_detach_controller", 00:05:16.604 "bdev_nvme_get_controllers", 00:05:16.604 "bdev_nvme_attach_controller", 00:05:16.604 
"bdev_nvme_set_hotplug", 00:05:16.604 "bdev_nvme_set_options", 00:05:16.604 "bdev_passthru_delete", 00:05:16.604 "bdev_passthru_create", 00:05:16.604 "bdev_lvol_grow_lvstore", 00:05:16.604 "bdev_lvol_get_lvols", 00:05:16.604 "bdev_lvol_get_lvstores", 00:05:16.604 "bdev_lvol_delete", 00:05:16.604 "bdev_lvol_set_read_only", 00:05:16.604 "bdev_lvol_resize", 00:05:16.604 "bdev_lvol_decouple_parent", 00:05:16.604 "bdev_lvol_inflate", 00:05:16.604 "bdev_lvol_rename", 00:05:16.604 "bdev_lvol_clone_bdev", 00:05:16.604 "bdev_lvol_clone", 00:05:16.604 "bdev_lvol_snapshot", 00:05:16.604 "bdev_lvol_create", 00:05:16.604 "bdev_lvol_delete_lvstore", 00:05:16.604 "bdev_lvol_rename_lvstore", 00:05:16.604 "bdev_lvol_create_lvstore", 00:05:16.604 "bdev_raid_set_options", 00:05:16.605 "bdev_raid_remove_base_bdev", 00:05:16.605 "bdev_raid_add_base_bdev", 00:05:16.605 "bdev_raid_delete", 00:05:16.605 "bdev_raid_create", 00:05:16.605 "bdev_raid_get_bdevs", 00:05:16.605 "bdev_error_inject_error", 00:05:16.605 "bdev_error_delete", 00:05:16.605 "bdev_error_create", 00:05:16.605 "bdev_split_delete", 00:05:16.605 "bdev_split_create", 00:05:16.605 "bdev_delay_delete", 00:05:16.605 "bdev_delay_create", 00:05:16.605 "bdev_delay_update_latency", 00:05:16.605 "bdev_zone_block_delete", 00:05:16.605 "bdev_zone_block_create", 00:05:16.605 "blobfs_create", 00:05:16.605 "blobfs_detect", 00:05:16.605 "blobfs_set_cache_size", 00:05:16.605 "bdev_xnvme_delete", 00:05:16.605 "bdev_xnvme_create", 00:05:16.605 "bdev_aio_delete", 00:05:16.605 "bdev_aio_rescan", 00:05:16.605 "bdev_aio_create", 00:05:16.605 "bdev_ftl_set_property", 00:05:16.605 "bdev_ftl_get_properties", 00:05:16.605 "bdev_ftl_get_stats", 00:05:16.605 "bdev_ftl_unmap", 00:05:16.605 "bdev_ftl_unload", 00:05:16.605 "bdev_ftl_delete", 00:05:16.605 "bdev_ftl_load", 00:05:16.605 "bdev_ftl_create", 00:05:16.605 "bdev_virtio_attach_controller", 00:05:16.605 "bdev_virtio_scsi_get_devices", 00:05:16.605 "bdev_virtio_detach_controller", 00:05:16.605 "bdev_virtio_blk_set_hotplug", 00:05:16.605 "bdev_iscsi_delete", 00:05:16.605 "bdev_iscsi_create", 00:05:16.605 "bdev_iscsi_set_options", 00:05:16.605 "accel_error_inject_error", 00:05:16.605 "ioat_scan_accel_module", 00:05:16.605 "dsa_scan_accel_module", 00:05:16.605 "iaa_scan_accel_module", 00:05:16.605 "iscsi_set_options", 00:05:16.605 "iscsi_get_auth_groups", 00:05:16.605 "iscsi_auth_group_remove_secret", 00:05:16.605 "iscsi_auth_group_add_secret", 00:05:16.605 "iscsi_delete_auth_group", 00:05:16.605 "iscsi_create_auth_group", 00:05:16.605 "iscsi_set_discovery_auth", 00:05:16.605 "iscsi_get_options", 00:05:16.605 "iscsi_target_node_request_logout", 00:05:16.605 "iscsi_target_node_set_redirect", 00:05:16.605 "iscsi_target_node_set_auth", 00:05:16.605 "iscsi_target_node_add_lun", 00:05:16.605 "iscsi_get_connections", 00:05:16.605 "iscsi_portal_group_set_auth", 00:05:16.605 "iscsi_start_portal_group", 00:05:16.605 "iscsi_delete_portal_group", 00:05:16.605 "iscsi_create_portal_group", 00:05:16.605 "iscsi_get_portal_groups", 00:05:16.605 "iscsi_delete_target_node", 00:05:16.605 "iscsi_target_node_remove_pg_ig_maps", 00:05:16.605 "iscsi_target_node_add_pg_ig_maps", 00:05:16.605 "iscsi_create_target_node", 00:05:16.605 "iscsi_get_target_nodes", 00:05:16.605 "iscsi_delete_initiator_group", 00:05:16.605 "iscsi_initiator_group_remove_initiators", 00:05:16.605 "iscsi_initiator_group_add_initiators", 00:05:16.605 "iscsi_create_initiator_group", 00:05:16.605 "iscsi_get_initiator_groups", 00:05:16.605 "nvmf_set_crdt", 00:05:16.605 
"nvmf_set_config", 00:05:16.605 "nvmf_set_max_subsystems", 00:05:16.605 "nvmf_subsystem_get_listeners", 00:05:16.605 "nvmf_subsystem_get_qpairs", 00:05:16.605 "nvmf_subsystem_get_controllers", 00:05:16.605 "nvmf_get_stats", 00:05:16.605 "nvmf_get_transports", 00:05:16.605 "nvmf_create_transport", 00:05:16.605 "nvmf_get_targets", 00:05:16.605 "nvmf_delete_target", 00:05:16.605 "nvmf_create_target", 00:05:16.605 "nvmf_subsystem_allow_any_host", 00:05:16.605 "nvmf_subsystem_remove_host", 00:05:16.605 "nvmf_subsystem_add_host", 00:05:16.605 "nvmf_subsystem_remove_ns", 00:05:16.605 "nvmf_subsystem_add_ns", 00:05:16.605 "nvmf_subsystem_listener_set_ana_state", 00:05:16.605 "nvmf_discovery_get_referrals", 00:05:16.605 "nvmf_discovery_remove_referral", 00:05:16.605 "nvmf_discovery_add_referral", 00:05:16.605 "nvmf_subsystem_remove_listener", 00:05:16.605 "nvmf_subsystem_add_listener", 00:05:16.605 "nvmf_delete_subsystem", 00:05:16.605 "nvmf_create_subsystem", 00:05:16.605 "nvmf_get_subsystems", 00:05:16.605 "env_dpdk_get_mem_stats", 00:05:16.605 "nbd_get_disks", 00:05:16.605 "nbd_stop_disk", 00:05:16.605 "nbd_start_disk", 00:05:16.605 "ublk_recover_disk", 00:05:16.605 "ublk_get_disks", 00:05:16.605 "ublk_stop_disk", 00:05:16.605 "ublk_start_disk", 00:05:16.605 "ublk_destroy_target", 00:05:16.605 "ublk_create_target", 00:05:16.605 "virtio_blk_create_transport", 00:05:16.605 "virtio_blk_get_transports", 00:05:16.605 "vhost_controller_set_coalescing", 00:05:16.605 "vhost_get_controllers", 00:05:16.605 "vhost_delete_controller", 00:05:16.605 "vhost_create_blk_controller", 00:05:16.605 "vhost_scsi_controller_remove_target", 00:05:16.605 "vhost_scsi_controller_add_target", 00:05:16.605 "vhost_start_scsi_controller", 00:05:16.605 "vhost_create_scsi_controller", 00:05:16.605 "thread_set_cpumask", 00:05:16.605 "framework_get_scheduler", 00:05:16.605 "framework_set_scheduler", 00:05:16.605 "framework_get_reactors", 00:05:16.605 "thread_get_io_channels", 00:05:16.605 "thread_get_pollers", 00:05:16.605 "thread_get_stats", 00:05:16.605 "framework_monitor_context_switch", 00:05:16.605 "spdk_kill_instance", 00:05:16.605 "log_enable_timestamps", 00:05:16.605 "log_get_flags", 00:05:16.605 "log_clear_flag", 00:05:16.605 "log_set_flag", 00:05:16.605 "log_get_level", 00:05:16.605 "log_set_level", 00:05:16.605 "log_get_print_level", 00:05:16.605 "log_set_print_level", 00:05:16.605 "framework_enable_cpumask_locks", 00:05:16.605 "framework_disable_cpumask_locks", 00:05:16.605 "framework_wait_init", 00:05:16.605 "framework_start_init", 00:05:16.605 "scsi_get_devices", 00:05:16.605 "bdev_get_histogram", 00:05:16.605 "bdev_enable_histogram", 00:05:16.605 "bdev_set_qos_limit", 00:05:16.605 "bdev_set_qd_sampling_period", 00:05:16.605 "bdev_get_bdevs", 00:05:16.605 "bdev_reset_iostat", 00:05:16.605 "bdev_get_iostat", 00:05:16.605 "bdev_examine", 00:05:16.605 "bdev_wait_for_examine", 00:05:16.605 "bdev_set_options", 00:05:16.605 "notify_get_notifications", 00:05:16.605 "notify_get_types", 00:05:16.605 "accel_get_stats", 00:05:16.605 "accel_set_options", 00:05:16.605 "accel_set_driver", 00:05:16.605 "accel_crypto_key_destroy", 00:05:16.605 "accel_crypto_keys_get", 00:05:16.605 "accel_crypto_key_create", 00:05:16.605 "accel_assign_opc", 00:05:16.605 "accel_get_module_info", 00:05:16.605 "accel_get_opc_assignments", 00:05:16.605 "vmd_rescan", 00:05:16.605 "vmd_remove_device", 00:05:16.605 "vmd_enable", 00:05:16.605 "sock_set_default_impl", 00:05:16.605 "sock_impl_set_options", 00:05:16.605 "sock_impl_get_options", 00:05:16.605 
"iobuf_get_stats", 00:05:16.605 "iobuf_set_options", 00:05:16.605 "framework_get_pci_devices", 00:05:16.605 "framework_get_config", 00:05:16.605 "framework_get_subsystems", 00:05:16.605 "trace_get_info", 00:05:16.605 "trace_get_tpoint_group_mask", 00:05:16.605 "trace_disable_tpoint_group", 00:05:16.605 "trace_enable_tpoint_group", 00:05:16.605 "trace_clear_tpoint_mask", 00:05:16.605 "trace_set_tpoint_mask", 00:05:16.605 "spdk_get_version", 00:05:16.605 "rpc_get_methods" 00:05:16.605 ] 00:05:16.605 04:01:18 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:16.605 04:01:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.605 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:16.864 04:01:18 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.864 04:01:18 -- spdkcli/tcp.sh@38 -- # killprocess 69002 00:05:16.864 04:01:18 -- common/autotest_common.sh@936 -- # '[' -z 69002 ']' 00:05:16.864 04:01:18 -- common/autotest_common.sh@940 -- # kill -0 69002 00:05:16.864 04:01:18 -- common/autotest_common.sh@941 -- # uname 00:05:16.864 04:01:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.864 04:01:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69002 00:05:16.864 killing process with pid 69002 00:05:16.864 04:01:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.864 04:01:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.864 04:01:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69002' 00:05:16.864 04:01:18 -- common/autotest_common.sh@955 -- # kill 69002 00:05:16.864 04:01:18 -- common/autotest_common.sh@960 -- # wait 69002 00:05:17.123 ************************************ 00:05:17.123 END TEST spdkcli_tcp 00:05:17.123 ************************************ 00:05:17.123 00:05:17.123 real 0m1.535s 00:05:17.123 user 0m2.682s 00:05:17.123 sys 0m0.389s 00:05:17.123 04:01:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.123 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.123 04:01:18 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.123 04:01:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.123 04:01:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.123 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.123 ************************************ 00:05:17.123 START TEST dpdk_mem_utility 00:05:17.123 ************************************ 00:05:17.123 04:01:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.123 * Looking for test storage... 
00:05:17.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:17.123 04:01:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:17.123 04:01:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:17.123 04:01:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:17.123 04:01:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:17.123 04:01:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:17.123 04:01:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:17.123 04:01:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:17.123 04:01:18 -- scripts/common.sh@335 -- # IFS=.-: 00:05:17.123 04:01:18 -- scripts/common.sh@335 -- # read -ra ver1 00:05:17.123 04:01:18 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.123 04:01:18 -- scripts/common.sh@336 -- # read -ra ver2 00:05:17.123 04:01:18 -- scripts/common.sh@337 -- # local 'op=<' 00:05:17.123 04:01:18 -- scripts/common.sh@339 -- # ver1_l=2 00:05:17.123 04:01:18 -- scripts/common.sh@340 -- # ver2_l=1 00:05:17.123 04:01:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:17.123 04:01:18 -- scripts/common.sh@343 -- # case "$op" in 00:05:17.123 04:01:18 -- scripts/common.sh@344 -- # : 1 00:05:17.123 04:01:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:17.123 04:01:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.123 04:01:18 -- scripts/common.sh@364 -- # decimal 1 00:05:17.123 04:01:18 -- scripts/common.sh@352 -- # local d=1 00:05:17.123 04:01:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.123 04:01:18 -- scripts/common.sh@354 -- # echo 1 00:05:17.123 04:01:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:17.123 04:01:18 -- scripts/common.sh@365 -- # decimal 2 00:05:17.123 04:01:18 -- scripts/common.sh@352 -- # local d=2 00:05:17.123 04:01:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.123 04:01:18 -- scripts/common.sh@354 -- # echo 2 00:05:17.123 04:01:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:17.123 04:01:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:17.123 04:01:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:17.123 04:01:18 -- scripts/common.sh@367 -- # return 0 00:05:17.123 04:01:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.123 04:01:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:17.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.123 --rc genhtml_branch_coverage=1 00:05:17.123 --rc genhtml_function_coverage=1 00:05:17.123 --rc genhtml_legend=1 00:05:17.123 --rc geninfo_all_blocks=1 00:05:17.123 --rc geninfo_unexecuted_blocks=1 00:05:17.123 00:05:17.123 ' 00:05:17.123 04:01:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:17.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.123 --rc genhtml_branch_coverage=1 00:05:17.123 --rc genhtml_function_coverage=1 00:05:17.123 --rc genhtml_legend=1 00:05:17.123 --rc geninfo_all_blocks=1 00:05:17.123 --rc geninfo_unexecuted_blocks=1 00:05:17.123 00:05:17.123 ' 00:05:17.123 04:01:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:17.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.124 --rc genhtml_branch_coverage=1 00:05:17.124 --rc genhtml_function_coverage=1 00:05:17.124 --rc genhtml_legend=1 00:05:17.124 --rc geninfo_all_blocks=1 00:05:17.124 --rc geninfo_unexecuted_blocks=1 00:05:17.124 00:05:17.124 ' 
00:05:17.124 04:01:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:17.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.124 --rc genhtml_branch_coverage=1 00:05:17.124 --rc genhtml_function_coverage=1 00:05:17.124 --rc genhtml_legend=1 00:05:17.124 --rc geninfo_all_blocks=1 00:05:17.124 --rc geninfo_unexecuted_blocks=1 00:05:17.124 00:05:17.124 ' 00:05:17.124 04:01:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.124 04:01:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69101 00:05:17.124 04:01:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69101 00:05:17.124 04:01:18 -- common/autotest_common.sh@829 -- # '[' -z 69101 ']' 00:05:17.124 04:01:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.124 04:01:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.124 04:01:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.124 04:01:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.124 04:01:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.124 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:17.383 [2024-11-26 04:01:18.933301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.383 [2024-11-26 04:01:18.933615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69101 ] 00:05:17.383 [2024-11-26 04:01:19.079511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.383 [2024-11-26 04:01:19.111530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.383 [2024-11-26 04:01:19.111712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.318 04:01:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.319 04:01:19 -- common/autotest_common.sh@862 -- # return 0 00:05:18.319 04:01:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:18.319 04:01:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:18.319 04:01:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.319 04:01:19 -- common/autotest_common.sh@10 -- # set +x 00:05:18.319 { 00:05:18.319 "filename": "/tmp/spdk_mem_dump.txt" 00:05:18.319 } 00:05:18.319 04:01:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.319 04:01:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:18.319 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:18.319 1 heaps totaling size 814.000000 MiB 00:05:18.319 size: 814.000000 MiB heap id: 0 00:05:18.319 end heaps---------- 00:05:18.319 8 mempools totaling size 598.116089 MiB 00:05:18.319 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:18.319 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:18.319 size: 84.521057 MiB name: bdev_io_69101 00:05:18.319 size: 51.011292 MiB name: evtpool_69101 00:05:18.319 size: 50.003479 MiB name: msgpool_69101 
00:05:18.319 size: 21.763794 MiB name: PDU_Pool 00:05:18.319 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:18.319 size: 0.026123 MiB name: Session_Pool 00:05:18.319 end mempools------- 00:05:18.319 6 memzones totaling size 4.142822 MiB 00:05:18.319 size: 1.000366 MiB name: RG_ring_0_69101 00:05:18.319 size: 1.000366 MiB name: RG_ring_1_69101 00:05:18.319 size: 1.000366 MiB name: RG_ring_4_69101 00:05:18.319 size: 1.000366 MiB name: RG_ring_5_69101 00:05:18.319 size: 0.125366 MiB name: RG_ring_2_69101 00:05:18.319 size: 0.015991 MiB name: RG_ring_3_69101 00:05:18.319 end memzones------- 00:05:18.319 04:01:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:18.319 heap id: 0 total size: 814.000000 MiB number of busy elements: 307 number of free elements: 15 00:05:18.319 list of free elements. size: 12.470642 MiB 00:05:18.319 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:18.319 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:18.319 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:18.319 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:18.319 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:18.319 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:18.319 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:18.319 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:18.319 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:18.319 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:05:18.319 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:18.319 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:18.319 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:18.319 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:18.319 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:18.319 list of standard malloc elements. 
size: 199.266785 MiB 00:05:18.319 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:18.319 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:18.319 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:18.319 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:18.319 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:18.319 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:18.319 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:18.319 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:18.319 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:18.319 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:18.319 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:18.319 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:18.320 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91b40 
with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:18.320 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94000 with size: 0.000183 MiB 
00:05:18.321 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:18.321 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:18.321 element at 
address: 0x200027e6d200 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:18.321 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f6c0 
with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:18.322 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:18.322 list of memzone associated elements. size: 602.262573 MiB 00:05:18.322 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:18.322 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:18.322 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:18.322 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:18.322 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:18.322 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_69101_0 00:05:18.322 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:18.322 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69101_0 00:05:18.322 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:18.322 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69101_0 00:05:18.322 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:18.322 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:18.322 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:18.322 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:18.322 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:18.322 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69101 00:05:18.322 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:18.322 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69101 00:05:18.322 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:18.322 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69101 00:05:18.322 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:18.322 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:18.322 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:18.322 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:18.322 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:18.322 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:18.322 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:18.322 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:18.322 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:18.322 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69101 00:05:18.322 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:18.322 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69101 00:05:18.322 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:18.322 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69101 00:05:18.322 element at 
address: 0x200031cfe940 with size: 1.000488 MiB 00:05:18.322 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69101 00:05:18.322 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:18.322 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69101 00:05:18.322 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:18.322 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:18.322 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:18.322 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:18.322 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:18.322 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:18.322 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:18.322 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69101 00:05:18.322 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:18.322 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:18.322 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:18.322 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:18.322 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:18.322 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69101 00:05:18.322 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:18.322 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:18.322 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:18.322 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69101 00:05:18.322 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:18.322 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69101 00:05:18.322 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:18.322 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:18.322 04:01:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:18.322 04:01:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69101 00:05:18.322 04:01:19 -- common/autotest_common.sh@936 -- # '[' -z 69101 ']' 00:05:18.322 04:01:19 -- common/autotest_common.sh@940 -- # kill -0 69101 00:05:18.322 04:01:19 -- common/autotest_common.sh@941 -- # uname 00:05:18.322 04:01:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.322 04:01:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69101 00:05:18.322 04:01:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.322 04:01:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.322 04:01:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69101' 00:05:18.322 killing process with pid 69101 00:05:18.322 04:01:19 -- common/autotest_common.sh@955 -- # kill 69101 00:05:18.322 04:01:19 -- common/autotest_common.sh@960 -- # wait 69101 00:05:18.581 00:05:18.581 real 0m1.429s 00:05:18.581 user 0m1.465s 00:05:18.581 sys 0m0.367s 00:05:18.581 04:01:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.581 04:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.581 ************************************ 00:05:18.581 END TEST dpdk_mem_utility 00:05:18.581 ************************************ 00:05:18.581 04:01:20 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.581 04:01:20 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.581 04:01:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.581 04:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.581 ************************************ 00:05:18.581 START TEST event 00:05:18.581 ************************************ 00:05:18.581 04:01:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.581 * Looking for test storage... 00:05:18.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.581 04:01:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:18.581 04:01:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:18.581 04:01:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:18.581 04:01:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:18.581 04:01:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:18.581 04:01:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:18.581 04:01:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:18.581 04:01:20 -- scripts/common.sh@335 -- # IFS=.-: 00:05:18.581 04:01:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:18.581 04:01:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.581 04:01:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:18.581 04:01:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:18.581 04:01:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:18.581 04:01:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:18.581 04:01:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:18.581 04:01:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:18.581 04:01:20 -- scripts/common.sh@344 -- # : 1 00:05:18.581 04:01:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:18.581 04:01:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.581 04:01:20 -- scripts/common.sh@364 -- # decimal 1 00:05:18.581 04:01:20 -- scripts/common.sh@352 -- # local d=1 00:05:18.581 04:01:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.581 04:01:20 -- scripts/common.sh@354 -- # echo 1 00:05:18.581 04:01:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:18.581 04:01:20 -- scripts/common.sh@365 -- # decimal 2 00:05:18.581 04:01:20 -- scripts/common.sh@352 -- # local d=2 00:05:18.581 04:01:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.581 04:01:20 -- scripts/common.sh@354 -- # echo 2 00:05:18.581 04:01:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:18.581 04:01:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:18.581 04:01:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:18.581 04:01:20 -- scripts/common.sh@367 -- # return 0 00:05:18.581 04:01:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.581 04:01:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:18.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.581 --rc genhtml_branch_coverage=1 00:05:18.581 --rc genhtml_function_coverage=1 00:05:18.581 --rc genhtml_legend=1 00:05:18.581 --rc geninfo_all_blocks=1 00:05:18.581 --rc geninfo_unexecuted_blocks=1 00:05:18.581 00:05:18.581 ' 00:05:18.581 04:01:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:18.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.581 --rc genhtml_branch_coverage=1 00:05:18.581 --rc genhtml_function_coverage=1 00:05:18.581 --rc genhtml_legend=1 00:05:18.581 --rc geninfo_all_blocks=1 00:05:18.581 --rc geninfo_unexecuted_blocks=1 00:05:18.581 00:05:18.581 ' 00:05:18.582 04:01:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:18.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.582 --rc genhtml_branch_coverage=1 00:05:18.582 --rc genhtml_function_coverage=1 00:05:18.582 --rc genhtml_legend=1 00:05:18.582 --rc geninfo_all_blocks=1 00:05:18.582 --rc geninfo_unexecuted_blocks=1 00:05:18.582 00:05:18.582 ' 00:05:18.582 04:01:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:18.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.582 --rc genhtml_branch_coverage=1 00:05:18.582 --rc genhtml_function_coverage=1 00:05:18.582 --rc genhtml_legend=1 00:05:18.582 --rc geninfo_all_blocks=1 00:05:18.582 --rc geninfo_unexecuted_blocks=1 00:05:18.582 00:05:18.582 ' 00:05:18.582 04:01:20 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:18.582 04:01:20 -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.582 04:01:20 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.582 04:01:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:18.582 04:01:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.582 04:01:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.582 ************************************ 00:05:18.582 START TEST event_perf 00:05:18.582 ************************************ 00:05:18.582 04:01:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.840 Running I/O for 1 seconds...[2024-11-26 04:01:20.366566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:18.840 [2024-11-26 04:01:20.366765] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69175 ] 00:05:18.840 [2024-11-26 04:01:20.514302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.840 [2024-11-26 04:01:20.548234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.840 [2024-11-26 04:01:20.548530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.840 [2024-11-26 04:01:20.548656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.840 [2024-11-26 04:01:20.548718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.214 Running I/O for 1 seconds... 00:05:20.214 lcore 0: 152302 00:05:20.214 lcore 1: 152299 00:05:20.214 lcore 2: 152300 00:05:20.214 lcore 3: 152301 00:05:20.214 done. 00:05:20.214 00:05:20.214 real 0m1.275s 00:05:20.214 user 0m4.072s 00:05:20.214 sys 0m0.086s 00:05:20.214 04:01:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.214 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.214 ************************************ 00:05:20.214 END TEST event_perf 00:05:20.214 ************************************ 00:05:20.214 04:01:21 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.214 04:01:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:20.214 04:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.214 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:20.214 ************************************ 00:05:20.214 START TEST event_reactor 00:05:20.214 ************************************ 00:05:20.214 04:01:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.214 [2024-11-26 04:01:21.682074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:20.214 [2024-11-26 04:01:21.682193] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69214 ] 00:05:20.214 [2024-11-26 04:01:21.838461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.214 [2024-11-26 04:01:21.878086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.588 test_start 00:05:21.588 oneshot 00:05:21.588 tick 100 00:05:21.588 tick 100 00:05:21.588 tick 250 00:05:21.588 tick 100 00:05:21.588 tick 100 00:05:21.588 tick 100 00:05:21.588 tick 250 00:05:21.588 tick 500 00:05:21.588 tick 100 00:05:21.588 tick 100 00:05:21.588 tick 250 00:05:21.588 tick 100 00:05:21.588 tick 100 00:05:21.588 test_end 00:05:21.588 ************************************ 00:05:21.588 END TEST event_reactor 00:05:21.588 ************************************ 00:05:21.588 00:05:21.588 real 0m1.293s 00:05:21.588 user 0m1.113s 00:05:21.588 sys 0m0.073s 00:05:21.588 04:01:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.588 04:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.588 04:01:22 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.588 04:01:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:21.588 04:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.588 04:01:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.588 ************************************ 00:05:21.588 START TEST event_reactor_perf 00:05:21.588 ************************************ 00:05:21.588 04:01:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.588 [2024-11-26 04:01:23.014473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:21.588 [2024-11-26 04:01:23.014608] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69251 ] 00:05:21.588 [2024-11-26 04:01:23.162677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.588 [2024-11-26 04:01:23.202398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.522 test_start 00:05:22.522 test_end 00:05:22.522 Performance: 313853 events per second 00:05:22.522 ************************************ 00:05:22.522 END TEST event_reactor_perf 00:05:22.522 ************************************ 00:05:22.522 00:05:22.522 real 0m1.281s 00:05:22.522 user 0m1.102s 00:05:22.522 sys 0m0.072s 00:05:22.522 04:01:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.522 04:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.780 04:01:24 -- event/event.sh@49 -- # uname -s 00:05:22.780 04:01:24 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:22.780 04:01:24 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:22.780 04:01:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.780 04:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.780 04:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.780 ************************************ 00:05:22.780 START TEST event_scheduler 00:05:22.780 ************************************ 00:05:22.780 04:01:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:22.780 * Looking for test storage... 00:05:22.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:22.780 04:01:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:22.780 04:01:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:22.780 04:01:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.780 04:01:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.780 04:01:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.780 04:01:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.780 04:01:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.780 04:01:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.780 04:01:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.780 04:01:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.780 04:01:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.780 04:01:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.780 04:01:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:22.780 04:01:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.780 04:01:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.780 04:01:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.780 04:01:24 -- scripts/common.sh@344 -- # : 1 00:05:22.780 04:01:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.780 04:01:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.780 04:01:24 -- scripts/common.sh@364 -- # decimal 1 00:05:22.781 04:01:24 -- scripts/common.sh@352 -- # local d=1 00:05:22.781 04:01:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.781 04:01:24 -- scripts/common.sh@354 -- # echo 1 00:05:22.781 04:01:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.781 04:01:24 -- scripts/common.sh@365 -- # decimal 2 00:05:22.781 04:01:24 -- scripts/common.sh@352 -- # local d=2 00:05:22.781 04:01:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.781 04:01:24 -- scripts/common.sh@354 -- # echo 2 00:05:22.781 04:01:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.781 04:01:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.781 04:01:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.781 04:01:24 -- scripts/common.sh@367 -- # return 0 00:05:22.781 04:01:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.781 04:01:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.781 --rc genhtml_branch_coverage=1 00:05:22.781 --rc genhtml_function_coverage=1 00:05:22.781 --rc genhtml_legend=1 00:05:22.781 --rc geninfo_all_blocks=1 00:05:22.781 --rc geninfo_unexecuted_blocks=1 00:05:22.781 00:05:22.781 ' 00:05:22.781 04:01:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.781 --rc genhtml_branch_coverage=1 00:05:22.781 --rc genhtml_function_coverage=1 00:05:22.781 --rc genhtml_legend=1 00:05:22.781 --rc geninfo_all_blocks=1 00:05:22.781 --rc geninfo_unexecuted_blocks=1 00:05:22.781 00:05:22.781 ' 00:05:22.781 04:01:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.781 --rc genhtml_branch_coverage=1 00:05:22.781 --rc genhtml_function_coverage=1 00:05:22.781 --rc genhtml_legend=1 00:05:22.781 --rc geninfo_all_blocks=1 00:05:22.781 --rc geninfo_unexecuted_blocks=1 00:05:22.781 00:05:22.781 ' 00:05:22.781 04:01:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.781 --rc genhtml_branch_coverage=1 00:05:22.781 --rc genhtml_function_coverage=1 00:05:22.781 --rc genhtml_legend=1 00:05:22.781 --rc geninfo_all_blocks=1 00:05:22.781 --rc geninfo_unexecuted_blocks=1 00:05:22.781 00:05:22.781 ' 00:05:22.781 04:01:24 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.781 04:01:24 -- scheduler/scheduler.sh@35 -- # scheduler_pid=69315 00:05:22.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.781 04:01:24 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.781 04:01:24 -- scheduler/scheduler.sh@37 -- # waitforlisten 69315 00:05:22.781 04:01:24 -- common/autotest_common.sh@829 -- # '[' -z 69315 ']' 00:05:22.781 04:01:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.781 04:01:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.781 04:01:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
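Because the scheduler test app is launched with --wait-for-rpc, framework initialization pauses until the harness selects a scheduler over RPC; the framework_set_scheduler and framework_start_init calls traced below amount to roughly this sequence (a sketch against the socket used in this run, not the verbatim harness code):

  # Select the dynamic scheduler while the app is paused in --wait-for-rpc mode.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  # Then let framework initialization proceed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init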
00:05:22.781 04:01:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.781 04:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.781 04:01:24 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.781 [2024-11-26 04:01:24.520687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.781 [2024-11-26 04:01:24.520805] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69315 ] 00:05:23.039 [2024-11-26 04:01:24.671133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.039 [2024-11-26 04:01:24.714708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.039 [2024-11-26 04:01:24.714932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.039 [2024-11-26 04:01:24.715098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.039 [2024-11-26 04:01:24.715098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.604 04:01:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.604 04:01:25 -- common/autotest_common.sh@862 -- # return 0 00:05:23.604 04:01:25 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.604 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.604 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.604 POWER: Env isn't set yet! 00:05:23.604 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:23.604 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.604 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.604 POWER: Attempting to initialise PSTAT power management... 00:05:23.604 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.604 POWER: Cannot set governor of lcore 0 to performance 00:05:23.604 POWER: Attempting to initialise AMD PSTATE power management... 00:05:23.604 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.604 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.604 POWER: Attempting to initialise CPPC power management... 00:05:23.604 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.604 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.604 POWER: Attempting to initialise VM power management... 
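The string of "Cannot set governor" failures above comes from the DPDK power library probing cpufreq sysfs nodes that this VM does not expose, so each ACPI, PSTAT, AMD PSTATE, and CPPC attempt falls through in turn. A purely illustrative check of the path the messages name:

  # On a host with cpufreq support each file prints a governor name
  # (for example performance or powersave); in this VM the glob matches nothing.
  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
      [ -e "$g" ] && cat "$g" || echo "no cpufreq governor exposed at $g"
  done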
00:05:23.604 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:23.604 POWER: Unable to set Power Management Environment for lcore 0 00:05:23.604 [2024-11-26 04:01:25.360301] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:23.604 [2024-11-26 04:01:25.360330] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:23.604 [2024-11-26 04:01:25.360339] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:23.604 [2024-11-26 04:01:25.360370] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:23.604 [2024-11-26 04:01:25.360378] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:23.604 [2024-11-26 04:01:25.360387] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:23.604 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.604 04:01:25 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.604 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.604 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 [2024-11-26 04:01:25.431002] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.863 04:01:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.863 04:01:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 ************************************ 00:05:23.863 START TEST scheduler_create_thread 00:05:23.863 ************************************ 00:05:23.863 04:01:25 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 2 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 3 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 4 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 5 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 6 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 7 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 8 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 9 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 10 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:23.863 04:01:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.863 04:01:25 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:23.863 04:01:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.863 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:05:24.430 04:01:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.430 04:01:26 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.430 04:01:26 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.430 04:01:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.430 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:05:25.805 ************************************ 00:05:25.805 END TEST scheduler_create_thread 00:05:25.805 ************************************ 00:05:25.805 04:01:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.805 00:05:25.805 real 0m1.752s 00:05:25.805 user 0m0.014s 00:05:25.805 sys 0m0.006s 00:05:25.805 04:01:27 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.805 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:25.805 04:01:27 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:25.805 04:01:27 -- scheduler/scheduler.sh@46 -- # killprocess 69315 00:05:25.805 04:01:27 -- common/autotest_common.sh@936 -- # '[' -z 69315 ']' 00:05:25.805 04:01:27 -- common/autotest_common.sh@940 -- # kill -0 69315 00:05:25.805 04:01:27 -- common/autotest_common.sh@941 -- # uname 00:05:25.805 04:01:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.805 04:01:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69315 00:05:25.805 killing process with pid 69315 00:05:25.805 04:01:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:25.805 04:01:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:25.805 04:01:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69315' 00:05:25.805 04:01:27 -- common/autotest_common.sh@955 -- # kill 69315 00:05:25.805 04:01:27 -- common/autotest_common.sh@960 -- # wait 69315 00:05:26.063 [2024-11-26 04:01:27.673469] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:26.321 00:05:26.321 real 0m3.533s 00:05:26.321 user 0m6.077s 00:05:26.321 sys 0m0.355s 00:05:26.321 ************************************ 00:05:26.321 END TEST event_scheduler 00:05:26.321 ************************************ 00:05:26.321 04:01:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.321 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.321 04:01:27 -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.321 04:01:27 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.321 04:01:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.321 04:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.321 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.321 ************************************ 00:05:26.321 START TEST app_repeat 00:05:26.321 ************************************ 00:05:26.321 04:01:27 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:26.321 04:01:27 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.321 04:01:27 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.321 04:01:27 -- event/event.sh@13 -- # local nbd_list 00:05:26.321 04:01:27 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.321 04:01:27 -- event/event.sh@14 -- # local bdev_list 00:05:26.321 04:01:27 -- event/event.sh@15 -- # local repeat_times=4 00:05:26.321 04:01:27 -- event/event.sh@17 -- # modprobe nbd 00:05:26.321 Process app_repeat pid: 69410 00:05:26.321 spdk_app_start Round 0 00:05:26.321 04:01:27 -- event/event.sh@19 -- # repeat_pid=69410 00:05:26.321 04:01:27 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.321 04:01:27 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 69410' 00:05:26.321 04:01:27 -- event/event.sh@23 -- # for i in {0..2} 00:05:26.321 04:01:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.321 04:01:27 -- event/event.sh@25 -- # waitforlisten 69410 /var/tmp/spdk-nbd.sock 00:05:26.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
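The scheduler_create_thread test that just completed above is driven entirely over JSON-RPC. A rough sketch of the same call pattern, assuming a running SPDK target on the default RPC socket and the test's scheduler_plugin importable by rpc.py (the harness wraps both inside its rpc_cmd helper, so these exact invocations are an illustration, not a transcript):

    # create an always-active thread pinned to core 0; the call prints the new thread id
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # set a thread (id 11 here, as returned by a create call in the run above) to 50% active
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    # delete a thread by id
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12
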
00:05:26.322 04:01:27 -- common/autotest_common.sh@829 -- # '[' -z 69410 ']' 00:05:26.322 04:01:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.322 04:01:27 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.322 04:01:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.322 04:01:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.322 04:01:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.322 04:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:26.322 [2024-11-26 04:01:27.932589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.322 [2024-11-26 04:01:27.932706] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69410 ] 00:05:26.322 [2024-11-26 04:01:28.081922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.580 [2024-11-26 04:01:28.122538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.580 [2024-11-26 04:01:28.122614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.147 04:01:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.147 04:01:28 -- common/autotest_common.sh@862 -- # return 0 00:05:27.147 04:01:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.409 Malloc0 00:05:27.409 04:01:28 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.670 Malloc1 00:05:27.670 04:01:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@12 -- # local i 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.670 /dev/nbd0 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.670 04:01:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 
00:05:27.670 04:01:29 -- common/autotest_common.sh@867 -- # local i 00:05:27.670 04:01:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.670 04:01:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.670 04:01:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:27.670 04:01:29 -- common/autotest_common.sh@871 -- # break 00:05:27.670 04:01:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.670 04:01:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.670 04:01:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.670 1+0 records in 00:05:27.670 1+0 records out 00:05:27.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242633 s, 16.9 MB/s 00:05:27.670 04:01:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.670 04:01:29 -- common/autotest_common.sh@884 -- # size=4096 00:05:27.670 04:01:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.670 04:01:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:27.670 04:01:29 -- common/autotest_common.sh@887 -- # return 0 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.670 04:01:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.931 /dev/nbd1 00:05:27.931 04:01:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.931 04:01:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.931 04:01:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:27.931 04:01:29 -- common/autotest_common.sh@867 -- # local i 00:05:27.931 04:01:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.931 04:01:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.931 04:01:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:27.931 04:01:29 -- common/autotest_common.sh@871 -- # break 00:05:27.931 04:01:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.931 04:01:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.931 04:01:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.931 1+0 records in 00:05:27.931 1+0 records out 00:05:27.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254064 s, 16.1 MB/s 00:05:27.931 04:01:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.931 04:01:29 -- common/autotest_common.sh@884 -- # size=4096 00:05:27.931 04:01:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.931 04:01:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:27.931 04:01:29 -- common/autotest_common.sh@887 -- # return 0 00:05:27.931 04:01:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.931 04:01:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.931 04:01:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.931 04:01:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.931 04:01:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 
00:05:28.190 { 00:05:28.190 "nbd_device": "/dev/nbd0", 00:05:28.190 "bdev_name": "Malloc0" 00:05:28.190 }, 00:05:28.190 { 00:05:28.190 "nbd_device": "/dev/nbd1", 00:05:28.190 "bdev_name": "Malloc1" 00:05:28.190 } 00:05:28.190 ]' 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.190 { 00:05:28.190 "nbd_device": "/dev/nbd0", 00:05:28.190 "bdev_name": "Malloc0" 00:05:28.190 }, 00:05:28.190 { 00:05:28.190 "nbd_device": "/dev/nbd1", 00:05:28.190 "bdev_name": "Malloc1" 00:05:28.190 } 00:05:28.190 ]' 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.190 /dev/nbd1' 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.190 /dev/nbd1' 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.190 04:01:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.190 256+0 records in 00:05:28.190 256+0 records out 00:05:28.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00782921 s, 134 MB/s 00:05:28.191 04:01:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.191 04:01:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.191 256+0 records in 00:05:28.191 256+0 records out 00:05:28.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216322 s, 48.5 MB/s 00:05:28.191 04:01:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.191 04:01:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.450 256+0 records in 00:05:28.450 256+0 records out 00:05:28.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203807 s, 51.4 MB/s 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.450 04:01:29 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@51 -- # local i 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.450 04:01:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@41 -- # break 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.450 04:01:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@41 -- # break 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.710 04:01:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@65 -- # true 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.971 04:01:30 -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.971 04:01:30 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.232 04:01:30 -- event/event.sh@35 -- # sleep 3 00:05:29.232 [2024-11-26 04:01:30.921614] app.c: 798:spdk_app_start: *NOTICE*: 
Total cores available: 2 00:05:29.232 [2024-11-26 04:01:30.962677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.232 [2024-11-26 04:01:30.962780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.493 [2024-11-26 04:01:31.008236] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.493 [2024-11-26 04:01:31.008300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.040 spdk_app_start Round 1 00:05:32.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.040 04:01:33 -- event/event.sh@23 -- # for i in {0..2} 00:05:32.040 04:01:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.040 04:01:33 -- event/event.sh@25 -- # waitforlisten 69410 /var/tmp/spdk-nbd.sock 00:05:32.040 04:01:33 -- common/autotest_common.sh@829 -- # '[' -z 69410 ']' 00:05:32.040 04:01:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.040 04:01:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.040 04:01:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.040 04:01:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.040 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:05:32.302 04:01:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.302 04:01:33 -- common/autotest_common.sh@862 -- # return 0 00:05:32.302 04:01:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.561 Malloc0 00:05:32.561 04:01:34 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.819 Malloc1 00:05:32.819 04:01:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@12 -- # local i 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.819 /dev/nbd0 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.819 04:01:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.819 04:01:34 -- common/autotest_common.sh@866 -- 
# local nbd_name=nbd0 00:05:32.819 04:01:34 -- common/autotest_common.sh@867 -- # local i 00:05:32.819 04:01:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:32.819 04:01:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:32.820 04:01:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:32.820 04:01:34 -- common/autotest_common.sh@871 -- # break 00:05:32.820 04:01:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:32.820 04:01:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:32.820 04:01:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.820 1+0 records in 00:05:32.820 1+0 records out 00:05:32.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535655 s, 7.6 MB/s 00:05:33.078 04:01:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.078 04:01:34 -- common/autotest_common.sh@884 -- # size=4096 00:05:33.078 04:01:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.078 04:01:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:33.078 04:01:34 -- common/autotest_common.sh@887 -- # return 0 00:05:33.078 04:01:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.078 04:01:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.079 04:01:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.079 /dev/nbd1 00:05:33.079 04:01:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.079 04:01:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.079 04:01:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:33.079 04:01:34 -- common/autotest_common.sh@867 -- # local i 00:05:33.079 04:01:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:33.079 04:01:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:33.079 04:01:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:33.079 04:01:34 -- common/autotest_common.sh@871 -- # break 00:05:33.079 04:01:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:33.079 04:01:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:33.079 04:01:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.079 1+0 records in 00:05:33.079 1+0 records out 00:05:33.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255772 s, 16.0 MB/s 00:05:33.079 04:01:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.079 04:01:34 -- common/autotest_common.sh@884 -- # size=4096 00:05:33.079 04:01:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.079 04:01:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:33.079 04:01:34 -- common/autotest_common.sh@887 -- # return 0 00:05:33.079 04:01:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.079 04:01:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.079 04:01:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.079 04:01:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.079 04:01:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@63 -- 
# nbd_disks_json='[ 00:05:33.337 { 00:05:33.337 "nbd_device": "/dev/nbd0", 00:05:33.337 "bdev_name": "Malloc0" 00:05:33.337 }, 00:05:33.337 { 00:05:33.337 "nbd_device": "/dev/nbd1", 00:05:33.337 "bdev_name": "Malloc1" 00:05:33.337 } 00:05:33.337 ]' 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.337 { 00:05:33.337 "nbd_device": "/dev/nbd0", 00:05:33.337 "bdev_name": "Malloc0" 00:05:33.337 }, 00:05:33.337 { 00:05:33.337 "nbd_device": "/dev/nbd1", 00:05:33.337 "bdev_name": "Malloc1" 00:05:33.337 } 00:05:33.337 ]' 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.337 /dev/nbd1' 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.337 /dev/nbd1' 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.337 256+0 records in 00:05:33.337 256+0 records out 00:05:33.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00636886 s, 165 MB/s 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.337 256+0 records in 00:05:33.337 256+0 records out 00:05:33.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164932 s, 63.6 MB/s 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.337 04:01:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.599 256+0 records in 00:05:33.599 256+0 records out 00:05:33.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170549 s, 61.5 MB/s 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:05:33.599 04:01:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@51 -- # local i 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@41 -- # break 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.599 04:01:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@41 -- # break 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.858 04:01:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@65 -- # true 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.117 04:01:35 -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.117 04:01:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.376 04:01:35 -- event/event.sh@35 -- # sleep 3 00:05:34.376 [2024-11-26 04:01:36.070665] app.c: 
798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.376 [2024-11-26 04:01:36.104008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.376 [2024-11-26 04:01:36.104098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.634 [2024-11-26 04:01:36.142558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.634 [2024-11-26 04:01:36.142775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.925 spdk_app_start Round 2 00:05:37.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.925 04:01:38 -- event/event.sh@23 -- # for i in {0..2} 00:05:37.925 04:01:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:37.925 04:01:38 -- event/event.sh@25 -- # waitforlisten 69410 /var/tmp/spdk-nbd.sock 00:05:37.925 04:01:38 -- common/autotest_common.sh@829 -- # '[' -z 69410 ']' 00:05:37.925 04:01:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.925 04:01:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.925 04:01:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.925 04:01:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.925 04:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:37.925 04:01:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.925 04:01:39 -- common/autotest_common.sh@862 -- # return 0 00:05:37.925 04:01:39 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.925 Malloc0 00:05:37.925 04:01:39 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.925 Malloc1 00:05:37.925 04:01:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@12 -- # local i 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.925 04:01:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.183 /dev/nbd0 00:05:38.183 04:01:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.183 04:01:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.183 04:01:39 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:38.183 04:01:39 -- common/autotest_common.sh@867 -- # local i 00:05:38.183 04:01:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.183 04:01:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.183 04:01:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:38.183 04:01:39 -- common/autotest_common.sh@871 -- # break 00:05:38.183 04:01:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.183 04:01:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.183 04:01:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.183 1+0 records in 00:05:38.183 1+0 records out 00:05:38.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561755 s, 7.3 MB/s 00:05:38.183 04:01:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.183 04:01:39 -- common/autotest_common.sh@884 -- # size=4096 00:05:38.183 04:01:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.183 04:01:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.183 04:01:39 -- common/autotest_common.sh@887 -- # return 0 00:05:38.183 04:01:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.183 04:01:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.183 04:01:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.183 /dev/nbd1 00:05:38.441 04:01:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.441 04:01:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.441 04:01:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:38.441 04:01:39 -- common/autotest_common.sh@867 -- # local i 00:05:38.441 04:01:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.441 04:01:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.441 04:01:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:38.441 04:01:39 -- common/autotest_common.sh@871 -- # break 00:05:38.441 04:01:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.441 04:01:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.441 04:01:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.441 1+0 records in 00:05:38.441 1+0 records out 00:05:38.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188062 s, 21.8 MB/s 00:05:38.441 04:01:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.441 04:01:39 -- common/autotest_common.sh@884 -- # size=4096 00:05:38.441 04:01:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.441 04:01:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.441 04:01:39 -- common/autotest_common.sh@887 -- # return 0 00:05:38.441 04:01:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.441 04:01:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.441 04:01:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.441 04:01:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.441 04:01:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.441 
04:01:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.441 { 00:05:38.441 "nbd_device": "/dev/nbd0", 00:05:38.441 "bdev_name": "Malloc0" 00:05:38.441 }, 00:05:38.441 { 00:05:38.441 "nbd_device": "/dev/nbd1", 00:05:38.441 "bdev_name": "Malloc1" 00:05:38.441 } 00:05:38.441 ]' 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.441 { 00:05:38.441 "nbd_device": "/dev/nbd0", 00:05:38.441 "bdev_name": "Malloc0" 00:05:38.441 }, 00:05:38.441 { 00:05:38.441 "nbd_device": "/dev/nbd1", 00:05:38.441 "bdev_name": "Malloc1" 00:05:38.441 } 00:05:38.441 ]' 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.441 /dev/nbd1' 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.441 /dev/nbd1' 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.441 04:01:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.442 04:01:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.442 04:01:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.701 256+0 records in 00:05:38.701 256+0 records out 00:05:38.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418338 s, 251 MB/s 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.701 256+0 records in 00:05:38.701 256+0 records out 00:05:38.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198443 s, 52.8 MB/s 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.701 256+0 records in 00:05:38.701 256+0 records out 00:05:38.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205987 s, 50.9 MB/s 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@51 -- # local i 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.701 04:01:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@41 -- # break 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@41 -- # break 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.959 04:01:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@65 -- # true 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.218 04:01:40 -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.218 04:01:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.478 04:01:41 -- event/event.sh@35 -- # sleep 3 00:05:39.478 [2024-11-26 
04:01:41.191881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.478 [2024-11-26 04:01:41.220750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.478 [2024-11-26 04:01:41.220839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.739 [2024-11-26 04:01:41.251717] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.739 [2024-11-26 04:01:41.251898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.020 04:01:44 -- event/event.sh@38 -- # waitforlisten 69410 /var/tmp/spdk-nbd.sock 00:05:43.020 04:01:44 -- common/autotest_common.sh@829 -- # '[' -z 69410 ']' 00:05:43.020 04:01:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.020 04:01:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.020 04:01:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.020 04:01:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.020 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.020 04:01:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.020 04:01:44 -- common/autotest_common.sh@862 -- # return 0 00:05:43.020 04:01:44 -- event/event.sh@39 -- # killprocess 69410 00:05:43.020 04:01:44 -- common/autotest_common.sh@936 -- # '[' -z 69410 ']' 00:05:43.020 04:01:44 -- common/autotest_common.sh@940 -- # kill -0 69410 00:05:43.020 04:01:44 -- common/autotest_common.sh@941 -- # uname 00:05:43.020 04:01:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.020 04:01:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69410 00:05:43.020 killing process with pid 69410 00:05:43.020 04:01:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.020 04:01:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.020 04:01:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69410' 00:05:43.020 04:01:44 -- common/autotest_common.sh@955 -- # kill 69410 00:05:43.020 04:01:44 -- common/autotest_common.sh@960 -- # wait 69410 00:05:43.020 spdk_app_start is called in Round 0. 00:05:43.020 Shutdown signal received, stop current app iteration 00:05:43.020 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:43.020 spdk_app_start is called in Round 1. 00:05:43.020 Shutdown signal received, stop current app iteration 00:05:43.020 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:43.020 spdk_app_start is called in Round 2. 00:05:43.020 Shutdown signal received, stop current app iteration 00:05:43.020 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:43.020 spdk_app_start is called in Round 3. 
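Each app_repeat round above runs the same Malloc-over-NBD verify cycle. Stripped of the harness, and assuming the nbd kernel module is loaded and /tmp/nbdrandtest is a free scratch path (the run above uses a path inside the repo instead), the pattern is roughly:

    # export a 64 MB malloc bdev with 4096-byte blocks as /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    # push 1 MiB of random data through the block device, then read it back and compare
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
    # detach the device again
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
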
00:05:43.020 Shutdown signal received, stop current app iteration 00:05:43.020 ************************************ 00:05:43.020 END TEST app_repeat 00:05:43.020 ************************************ 00:05:43.020 04:01:44 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:43.020 04:01:44 -- event/event.sh@42 -- # return 0 00:05:43.020 00:05:43.020 real 0m16.556s 00:05:43.020 user 0m36.534s 00:05:43.020 sys 0m2.136s 00:05:43.020 04:01:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.020 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.020 04:01:44 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:43.020 04:01:44 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.020 04:01:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.020 04:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.020 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.020 ************************************ 00:05:43.020 START TEST cpu_locks 00:05:43.020 ************************************ 00:05:43.020 04:01:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.020 * Looking for test storage... 00:05:43.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:43.020 04:01:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:43.020 04:01:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:43.020 04:01:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.020 04:01:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.020 04:01:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.020 04:01:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.020 04:01:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.020 04:01:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.020 04:01:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.021 04:01:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.021 04:01:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.021 04:01:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.021 04:01:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.021 04:01:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.021 04:01:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.021 04:01:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.021 04:01:44 -- scripts/common.sh@344 -- # : 1 00:05:43.021 04:01:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.021 04:01:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.021 04:01:44 -- scripts/common.sh@364 -- # decimal 1 00:05:43.021 04:01:44 -- scripts/common.sh@352 -- # local d=1 00:05:43.021 04:01:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.021 04:01:44 -- scripts/common.sh@354 -- # echo 1 00:05:43.021 04:01:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.021 04:01:44 -- scripts/common.sh@365 -- # decimal 2 00:05:43.021 04:01:44 -- scripts/common.sh@352 -- # local d=2 00:05:43.021 04:01:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.021 04:01:44 -- scripts/common.sh@354 -- # echo 2 00:05:43.021 04:01:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.021 04:01:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.021 04:01:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.021 04:01:44 -- scripts/common.sh@367 -- # return 0 00:05:43.021 04:01:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.021 04:01:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.021 --rc genhtml_branch_coverage=1 00:05:43.021 --rc genhtml_function_coverage=1 00:05:43.021 --rc genhtml_legend=1 00:05:43.021 --rc geninfo_all_blocks=1 00:05:43.021 --rc geninfo_unexecuted_blocks=1 00:05:43.021 00:05:43.021 ' 00:05:43.021 04:01:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.021 --rc genhtml_branch_coverage=1 00:05:43.021 --rc genhtml_function_coverage=1 00:05:43.021 --rc genhtml_legend=1 00:05:43.021 --rc geninfo_all_blocks=1 00:05:43.021 --rc geninfo_unexecuted_blocks=1 00:05:43.021 00:05:43.021 ' 00:05:43.021 04:01:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.021 --rc genhtml_branch_coverage=1 00:05:43.021 --rc genhtml_function_coverage=1 00:05:43.021 --rc genhtml_legend=1 00:05:43.021 --rc geninfo_all_blocks=1 00:05:43.021 --rc geninfo_unexecuted_blocks=1 00:05:43.021 00:05:43.021 ' 00:05:43.021 04:01:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.021 --rc genhtml_branch_coverage=1 00:05:43.021 --rc genhtml_function_coverage=1 00:05:43.021 --rc genhtml_legend=1 00:05:43.021 --rc geninfo_all_blocks=1 00:05:43.021 --rc geninfo_unexecuted_blocks=1 00:05:43.021 00:05:43.021 ' 00:05:43.021 04:01:44 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:43.021 04:01:44 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:43.021 04:01:44 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:43.021 04:01:44 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:43.021 04:01:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.021 04:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.021 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.021 ************************************ 00:05:43.021 START TEST default_locks 00:05:43.021 ************************************ 00:05:43.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
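The default_locks test starting here boils down to launching a bare spdk_tgt on core 0 and confirming it holds the per-core file lock. A condensed sketch of that flow, under the assumption that the target is started from the same shell (the harness additionally polls the RPC socket before probing):

    build/bin/spdk_tgt -m 0x1 &
    tgt=$!
    sleep 1   # crude stand-in for the harness's waitforlisten polling
    # spdk_tgt takes a POSIX file lock (name contains spdk_cpu_lock) for each core it claims
    lslocks -p "$tgt" | grep -q spdk_cpu_lock && echo "core 0 lock held"
    # terminate and reap, as the harness's killprocess helper does
    kill "$tgt" && wait "$tgt"
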
00:05:43.021 04:01:44 -- common/autotest_common.sh@1114 -- # default_locks 00:05:43.021 04:01:44 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69823 00:05:43.021 04:01:44 -- event/cpu_locks.sh@47 -- # waitforlisten 69823 00:05:43.021 04:01:44 -- common/autotest_common.sh@829 -- # '[' -z 69823 ']' 00:05:43.021 04:01:44 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.021 04:01:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.021 04:01:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.021 04:01:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.021 04:01:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.021 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.021 [2024-11-26 04:01:44.712410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.021 [2024-11-26 04:01:44.712538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69823 ] 00:05:43.279 [2024-11-26 04:01:44.857945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.279 [2024-11-26 04:01:44.887872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.279 [2024-11-26 04:01:44.888040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.845 04:01:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.845 04:01:45 -- common/autotest_common.sh@862 -- # return 0 00:05:43.845 04:01:45 -- event/cpu_locks.sh@49 -- # locks_exist 69823 00:05:43.845 04:01:45 -- event/cpu_locks.sh@22 -- # lslocks -p 69823 00:05:43.845 04:01:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.102 04:01:45 -- event/cpu_locks.sh@50 -- # killprocess 69823 00:05:44.102 04:01:45 -- common/autotest_common.sh@936 -- # '[' -z 69823 ']' 00:05:44.103 04:01:45 -- common/autotest_common.sh@940 -- # kill -0 69823 00:05:44.103 04:01:45 -- common/autotest_common.sh@941 -- # uname 00:05:44.103 04:01:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.103 04:01:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69823 00:05:44.103 killing process with pid 69823 00:05:44.103 04:01:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.103 04:01:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.103 04:01:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69823' 00:05:44.103 04:01:45 -- common/autotest_common.sh@955 -- # kill 69823 00:05:44.103 04:01:45 -- common/autotest_common.sh@960 -- # wait 69823 00:05:44.361 04:01:45 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69823 00:05:44.361 04:01:45 -- common/autotest_common.sh@650 -- # local es=0 00:05:44.361 04:01:45 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69823 00:05:44.361 04:01:45 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:44.361 04:01:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.361 04:01:45 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:44.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.361 04:01:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.361 04:01:45 -- common/autotest_common.sh@653 -- # waitforlisten 69823 00:05:44.361 04:01:45 -- common/autotest_common.sh@829 -- # '[' -z 69823 ']' 00:05:44.361 04:01:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.361 04:01:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.361 04:01:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.361 04:01:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.361 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.361 ERROR: process (pid: 69823) is no longer running 00:05:44.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69823) - No such process 00:05:44.361 04:01:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.361 04:01:45 -- common/autotest_common.sh@862 -- # return 1 00:05:44.361 04:01:45 -- common/autotest_common.sh@653 -- # es=1 00:05:44.361 04:01:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:44.361 04:01:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:44.361 04:01:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:44.361 04:01:45 -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.361 04:01:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.361 04:01:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.361 04:01:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.361 00:05:44.361 real 0m1.264s 00:05:44.361 user 0m1.250s 00:05:44.361 sys 0m0.383s 00:05:44.361 04:01:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.361 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.361 ************************************ 00:05:44.361 END TEST default_locks 00:05:44.361 ************************************ 00:05:44.361 04:01:45 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.361 04:01:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.361 04:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.361 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.361 ************************************ 00:05:44.361 START TEST default_locks_via_rpc 00:05:44.361 ************************************ 00:05:44.361 04:01:45 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:44.361 04:01:45 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69865 00:05:44.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.361 04:01:45 -- event/cpu_locks.sh@63 -- # waitforlisten 69865 00:05:44.361 04:01:45 -- common/autotest_common.sh@829 -- # '[' -z 69865 ']' 00:05:44.361 04:01:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.361 04:01:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.361 04:01:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.361 04:01:45 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.361 04:01:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.361 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.361 [2024-11-26 04:01:46.037053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:44.361 [2024-11-26 04:01:46.037162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69865 ] 00:05:44.620 [2024-11-26 04:01:46.182031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.620 [2024-11-26 04:01:46.212586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.620 [2024-11-26 04:01:46.212750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.185 04:01:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.185 04:01:46 -- common/autotest_common.sh@862 -- # return 0 00:05:45.185 04:01:46 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:45.185 04:01:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.185 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:05:45.185 04:01:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.185 04:01:46 -- event/cpu_locks.sh@67 -- # no_locks 00:05:45.185 04:01:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.185 04:01:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.185 04:01:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.185 04:01:46 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.185 04:01:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.185 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:05:45.185 04:01:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.185 04:01:46 -- event/cpu_locks.sh@71 -- # locks_exist 69865 00:05:45.185 04:01:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.185 04:01:46 -- event/cpu_locks.sh@22 -- # lslocks -p 69865 00:05:45.443 04:01:47 -- event/cpu_locks.sh@73 -- # killprocess 69865 00:05:45.443 04:01:47 -- common/autotest_common.sh@936 -- # '[' -z 69865 ']' 00:05:45.443 04:01:47 -- common/autotest_common.sh@940 -- # kill -0 69865 00:05:45.443 04:01:47 -- common/autotest_common.sh@941 -- # uname 00:05:45.443 04:01:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.443 04:01:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69865 00:05:45.443 killing process with pid 69865 00:05:45.443 04:01:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.443 04:01:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.443 04:01:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69865' 00:05:45.443 04:01:47 -- common/autotest_common.sh@955 -- # kill 69865 00:05:45.443 04:01:47 -- common/autotest_common.sh@960 -- # wait 69865 00:05:45.701 00:05:45.701 real 0m1.383s 00:05:45.701 user 0m1.417s 00:05:45.701 sys 0m0.402s 00:05:45.701 04:01:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.701 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.701 ************************************ 00:05:45.701 END TEST default_locks_via_rpc 00:05:45.701 ************************************ 00:05:45.701 04:01:47 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.701 04:01:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.701 04:01:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.701 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.701 
************************************ 00:05:45.701 START TEST non_locking_app_on_locked_coremask 00:05:45.701 ************************************ 00:05:45.701 04:01:47 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:45.701 04:01:47 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69917 00:05:45.701 04:01:47 -- event/cpu_locks.sh@81 -- # waitforlisten 69917 /var/tmp/spdk.sock 00:05:45.701 04:01:47 -- common/autotest_common.sh@829 -- # '[' -z 69917 ']' 00:05:45.701 04:01:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.701 04:01:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.701 04:01:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.701 04:01:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.701 04:01:47 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.701 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:05:45.958 [2024-11-26 04:01:47.467808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.958 [2024-11-26 04:01:47.467993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69917 ] 00:05:45.959 [2024-11-26 04:01:47.632162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.959 [2024-11-26 04:01:47.662437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.959 [2024-11-26 04:01:47.662630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.891 04:01:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.891 04:01:48 -- common/autotest_common.sh@862 -- # return 0 00:05:46.891 04:01:48 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:46.891 04:01:48 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69928 00:05:46.891 04:01:48 -- event/cpu_locks.sh@85 -- # waitforlisten 69928 /var/tmp/spdk2.sock 00:05:46.891 04:01:48 -- common/autotest_common.sh@829 -- # '[' -z 69928 ']' 00:05:46.891 04:01:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.891 04:01:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.891 04:01:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.891 04:01:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.891 04:01:48 -- common/autotest_common.sh@10 -- # set +x 00:05:46.891 [2024-11-26 04:01:48.405539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.891 [2024-11-26 04:01:48.405671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69928 ] 00:05:46.891 [2024-11-26 04:01:48.554054] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:46.891 [2024-11-26 04:01:48.554102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.891 [2024-11-26 04:01:48.618464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.891 [2024-11-26 04:01:48.618632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.825 04:01:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.825 04:01:49 -- common/autotest_common.sh@862 -- # return 0 00:05:47.825 04:01:49 -- event/cpu_locks.sh@87 -- # locks_exist 69917 00:05:47.825 04:01:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.825 04:01:49 -- event/cpu_locks.sh@22 -- # lslocks -p 69917 00:05:47.825 04:01:49 -- event/cpu_locks.sh@89 -- # killprocess 69917 00:05:47.825 04:01:49 -- common/autotest_common.sh@936 -- # '[' -z 69917 ']' 00:05:47.825 04:01:49 -- common/autotest_common.sh@940 -- # kill -0 69917 00:05:47.825 04:01:49 -- common/autotest_common.sh@941 -- # uname 00:05:47.825 04:01:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.825 04:01:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69917 00:05:48.084 04:01:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.084 killing process with pid 69917 00:05:48.084 04:01:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.084 04:01:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69917' 00:05:48.084 04:01:49 -- common/autotest_common.sh@955 -- # kill 69917 00:05:48.084 04:01:49 -- common/autotest_common.sh@960 -- # wait 69917 00:05:48.342 04:01:50 -- event/cpu_locks.sh@90 -- # killprocess 69928 00:05:48.342 04:01:50 -- common/autotest_common.sh@936 -- # '[' -z 69928 ']' 00:05:48.342 04:01:50 -- common/autotest_common.sh@940 -- # kill -0 69928 00:05:48.342 04:01:50 -- common/autotest_common.sh@941 -- # uname 00:05:48.342 04:01:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.342 04:01:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69928 00:05:48.342 04:01:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.342 killing process with pid 69928 00:05:48.342 04:01:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.342 04:01:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69928' 00:05:48.342 04:01:50 -- common/autotest_common.sh@955 -- # kill 69928 00:05:48.342 04:01:50 -- common/autotest_common.sh@960 -- # wait 69928 00:05:48.601 00:05:48.601 real 0m2.877s 00:05:48.601 user 0m3.222s 00:05:48.601 sys 0m0.763s 00:05:48.601 04:01:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.601 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:48.601 ************************************ 00:05:48.601 END TEST non_locking_app_on_locked_coremask 00:05:48.601 ************************************ 00:05:48.601 04:01:50 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:48.601 04:01:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.601 04:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.601 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:48.601 ************************************ 00:05:48.601 START TEST locking_app_on_unlocked_coremask 00:05:48.601 ************************************ 00:05:48.601 04:01:50 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:48.601 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.601 04:01:50 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=69986 00:05:48.601 04:01:50 -- event/cpu_locks.sh@99 -- # waitforlisten 69986 /var/tmp/spdk.sock 00:05:48.601 04:01:50 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:48.601 04:01:50 -- common/autotest_common.sh@829 -- # '[' -z 69986 ']' 00:05:48.601 04:01:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.601 04:01:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.601 04:01:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.601 04:01:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.601 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:48.601 [2024-11-26 04:01:50.361207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.601 [2024-11-26 04:01:50.361306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69986 ] 00:05:48.860 [2024-11-26 04:01:50.502106] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:48.860 [2024-11-26 04:01:50.502155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.860 [2024-11-26 04:01:50.531438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.860 [2024-11-26 04:01:50.531624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.797 04:01:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.797 04:01:51 -- common/autotest_common.sh@862 -- # return 0 00:05:49.797 04:01:51 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69999 00:05:49.797 04:01:51 -- event/cpu_locks.sh@103 -- # waitforlisten 69999 /var/tmp/spdk2.sock 00:05:49.797 04:01:51 -- common/autotest_common.sh@829 -- # '[' -z 69999 ']' 00:05:49.797 04:01:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.797 04:01:51 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.797 04:01:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.797 04:01:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.797 04:01:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.797 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:49.797 [2024-11-26 04:01:51.292566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:49.797 [2024-11-26 04:01:51.292890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69999 ] 00:05:49.797 [2024-11-26 04:01:51.440773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.797 [2024-11-26 04:01:51.497271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.797 [2024-11-26 04:01:51.497436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.365 04:01:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.365 04:01:52 -- common/autotest_common.sh@862 -- # return 0 00:05:50.365 04:01:52 -- event/cpu_locks.sh@105 -- # locks_exist 69999 00:05:50.365 04:01:52 -- event/cpu_locks.sh@22 -- # lslocks -p 69999 00:05:50.365 04:01:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.624 04:01:52 -- event/cpu_locks.sh@107 -- # killprocess 69986 00:05:50.624 04:01:52 -- common/autotest_common.sh@936 -- # '[' -z 69986 ']' 00:05:50.624 04:01:52 -- common/autotest_common.sh@940 -- # kill -0 69986 00:05:50.624 04:01:52 -- common/autotest_common.sh@941 -- # uname 00:05:50.624 04:01:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.624 04:01:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69986 00:05:50.883 killing process with pid 69986 00:05:50.883 04:01:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.883 04:01:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.883 04:01:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69986' 00:05:50.883 04:01:52 -- common/autotest_common.sh@955 -- # kill 69986 00:05:50.883 04:01:52 -- common/autotest_common.sh@960 -- # wait 69986 00:05:51.141 04:01:52 -- event/cpu_locks.sh@108 -- # killprocess 69999 00:05:51.141 04:01:52 -- common/autotest_common.sh@936 -- # '[' -z 69999 ']' 00:05:51.141 04:01:52 -- common/autotest_common.sh@940 -- # kill -0 69999 00:05:51.141 04:01:52 -- common/autotest_common.sh@941 -- # uname 00:05:51.141 04:01:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.141 04:01:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69999 00:05:51.141 killing process with pid 69999 00:05:51.141 04:01:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.141 04:01:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.141 04:01:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69999' 00:05:51.141 04:01:52 -- common/autotest_common.sh@955 -- # kill 69999 00:05:51.141 04:01:52 -- common/autotest_common.sh@960 -- # wait 69999 00:05:51.400 00:05:51.400 real 0m2.768s 00:05:51.400 user 0m3.103s 00:05:51.400 sys 0m0.718s 00:05:51.400 ************************************ 00:05:51.400 END TEST locking_app_on_unlocked_coremask 00:05:51.400 04:01:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.400 04:01:53 -- common/autotest_common.sh@10 -- # set +x 00:05:51.400 ************************************ 00:05:51.400 04:01:53 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.400 04:01:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.401 04:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.401 04:01:53 -- common/autotest_common.sh@10 -- # set +x 
00:05:51.401 ************************************ 00:05:51.401 START TEST locking_app_on_locked_coremask 00:05:51.401 ************************************ 00:05:51.401 04:01:53 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:51.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.401 04:01:53 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70054 00:05:51.401 04:01:53 -- event/cpu_locks.sh@116 -- # waitforlisten 70054 /var/tmp/spdk.sock 00:05:51.401 04:01:53 -- common/autotest_common.sh@829 -- # '[' -z 70054 ']' 00:05:51.401 04:01:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.401 04:01:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.401 04:01:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.401 04:01:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.401 04:01:53 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.401 04:01:53 -- common/autotest_common.sh@10 -- # set +x 00:05:51.660 [2024-11-26 04:01:53.173287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.660 [2024-11-26 04:01:53.173430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70054 ] 00:05:51.660 [2024-11-26 04:01:53.318166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.660 [2024-11-26 04:01:53.346401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.660 [2024-11-26 04:01:53.346601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.633 04:01:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.633 04:01:53 -- common/autotest_common.sh@862 -- # return 0 00:05:52.633 04:01:53 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70070 00:05:52.633 04:01:53 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70070 /var/tmp/spdk2.sock 00:05:52.633 04:01:53 -- common/autotest_common.sh@650 -- # local es=0 00:05:52.633 04:01:53 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.633 04:01:53 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70070 /var/tmp/spdk2.sock 00:05:52.633 04:01:53 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:52.633 04:01:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.633 04:01:53 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:52.633 04:01:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.633 04:01:53 -- common/autotest_common.sh@653 -- # waitforlisten 70070 /var/tmp/spdk2.sock 00:05:52.633 04:01:53 -- common/autotest_common.sh@829 -- # '[' -z 70070 ']' 00:05:52.633 04:01:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.633 04:01:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.633 04:01:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
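The failure that follows is intentional: the second spdk_tgt (pid 70070) is started on the same core mask while pid 70054 still holds the core-0 lock, and the NOT wrapper asserts that waitforlisten must fail. A simplified sketch of the inversion logic visible in the trace (the real helper in autotest_common.sh additionally treats exit codes above 128, i.e. signal deaths, as real errors rather than expected failures):

NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))   # succeed only when the wrapped command failed
}

NOT waitforlisten 70070 /var/tmp/spdk2.sock && echo "second target was correctly locked out"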
00:05:52.633 04:01:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.633 04:01:54 -- common/autotest_common.sh@10 -- # set +x 00:05:52.633 [2024-11-26 04:01:54.062032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.633 [2024-11-26 04:01:54.062356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70070 ] 00:05:52.633 [2024-11-26 04:01:54.209478] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70054 has claimed it. 00:05:52.633 [2024-11-26 04:01:54.209548] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.201 ERROR: process (pid: 70070) is no longer running 00:05:53.201 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70070) - No such process 00:05:53.201 04:01:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.201 04:01:54 -- common/autotest_common.sh@862 -- # return 1 00:05:53.201 04:01:54 -- common/autotest_common.sh@653 -- # es=1 00:05:53.201 04:01:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.201 04:01:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.201 04:01:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.201 04:01:54 -- event/cpu_locks.sh@122 -- # locks_exist 70054 00:05:53.201 04:01:54 -- event/cpu_locks.sh@22 -- # lslocks -p 70054 00:05:53.201 04:01:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.201 04:01:54 -- event/cpu_locks.sh@124 -- # killprocess 70054 00:05:53.201 04:01:54 -- common/autotest_common.sh@936 -- # '[' -z 70054 ']' 00:05:53.201 04:01:54 -- common/autotest_common.sh@940 -- # kill -0 70054 00:05:53.201 04:01:54 -- common/autotest_common.sh@941 -- # uname 00:05:53.201 04:01:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.201 04:01:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70054 00:05:53.201 04:01:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.201 killing process with pid 70054 00:05:53.201 04:01:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.201 04:01:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70054' 00:05:53.201 04:01:54 -- common/autotest_common.sh@955 -- # kill 70054 00:05:53.201 04:01:54 -- common/autotest_common.sh@960 -- # wait 70054 00:05:53.459 ************************************ 00:05:53.459 END TEST locking_app_on_locked_coremask 00:05:53.459 ************************************ 00:05:53.459 00:05:53.459 real 0m1.962s 00:05:53.459 user 0m2.205s 00:05:53.459 sys 0m0.457s 00:05:53.459 04:01:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.459 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:53.459 04:01:55 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:53.459 04:01:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.459 04:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.459 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:53.459 ************************************ 00:05:53.459 START TEST locking_overlapped_coremask 00:05:53.459 ************************************ 00:05:53.459 04:01:55 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:53.459 04:01:55 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70112 00:05:53.459 04:01:55 -- event/cpu_locks.sh@133 -- # waitforlisten 70112 /var/tmp/spdk.sock 00:05:53.459 04:01:55 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:53.459 04:01:55 -- common/autotest_common.sh@829 -- # '[' -z 70112 ']' 00:05:53.459 04:01:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.459 04:01:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.459 04:01:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.459 04:01:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.459 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:53.459 [2024-11-26 04:01:55.178435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.459 [2024-11-26 04:01:55.178560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70112 ] 00:05:53.717 [2024-11-26 04:01:55.324231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.717 [2024-11-26 04:01:55.354392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.717 [2024-11-26 04:01:55.354724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.717 [2024-11-26 04:01:55.355038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.717 [2024-11-26 04:01:55.355108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.283 04:01:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.283 04:01:55 -- common/autotest_common.sh@862 -- # return 0 00:05:54.283 04:01:55 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70130 00:05:54.283 04:01:55 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70130 /var/tmp/spdk2.sock 00:05:54.283 04:01:55 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:54.283 04:01:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:54.283 04:01:55 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70130 /var/tmp/spdk2.sock 00:05:54.283 04:01:55 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:54.283 04:01:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.283 04:01:55 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:54.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.283 04:01:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.283 04:01:55 -- common/autotest_common.sh@653 -- # waitforlisten 70130 /var/tmp/spdk2.sock 00:05:54.283 04:01:55 -- common/autotest_common.sh@829 -- # '[' -z 70130 ']' 00:05:54.283 04:01:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.283 04:01:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.283 04:01:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:54.283 04:01:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.283 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:54.540 [2024-11-26 04:01:56.059995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.540 [2024-11-26 04:01:56.060111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70130 ] 00:05:54.540 [2024-11-26 04:01:56.214143] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70112 has claimed it. 00:05:54.540 [2024-11-26 04:01:56.214204] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.105 ERROR: process (pid: 70130) is no longer running 00:05:55.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70130) - No such process 00:05:55.105 04:01:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.105 04:01:56 -- common/autotest_common.sh@862 -- # return 1 00:05:55.105 04:01:56 -- common/autotest_common.sh@653 -- # es=1 00:05:55.105 04:01:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.105 04:01:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.105 04:01:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.105 04:01:56 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.105 04:01:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.105 04:01:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.105 04:01:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.105 04:01:56 -- event/cpu_locks.sh@141 -- # killprocess 70112 00:05:55.105 04:01:56 -- common/autotest_common.sh@936 -- # '[' -z 70112 ']' 00:05:55.105 04:01:56 -- common/autotest_common.sh@940 -- # kill -0 70112 00:05:55.105 04:01:56 -- common/autotest_common.sh@941 -- # uname 00:05:55.105 04:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.105 04:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70112 00:05:55.105 04:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.105 04:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.105 killing process with pid 70112 00:05:55.105 04:01:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70112' 00:05:55.105 04:01:56 -- common/autotest_common.sh@955 -- # kill 70112 00:05:55.105 04:01:56 -- common/autotest_common.sh@960 -- # wait 70112 00:05:55.363 00:05:55.363 real 0m1.797s 00:05:55.363 user 0m4.906s 00:05:55.363 sys 0m0.374s 00:05:55.363 04:01:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.363 ************************************ 00:05:55.363 END TEST locking_overlapped_coremask 00:05:55.363 ************************************ 00:05:55.363 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:55.363 04:01:56 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:55.363 04:01:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.363 04:01:56 -- 
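check_remaining_locks, as traced above, verifies that a target started with -m 0x7 left behind exactly one lock file per claimed core and nothing else; the glob/expected-list comparison appears almost verbatim in the xtrace:

check_remaining_locks() {
  # Glob whatever lock files exist and compare against the exact set a
  # 3-core (0x7) target should have created: spdk_cpu_lock_000..002.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]
}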
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.363 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:55.363 ************************************ 00:05:55.363 START TEST locking_overlapped_coremask_via_rpc 00:05:55.363 ************************************ 00:05:55.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.363 04:01:56 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:55.363 04:01:56 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70172 00:05:55.363 04:01:56 -- event/cpu_locks.sh@149 -- # waitforlisten 70172 /var/tmp/spdk.sock 00:05:55.363 04:01:56 -- common/autotest_common.sh@829 -- # '[' -z 70172 ']' 00:05:55.363 04:01:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.363 04:01:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.363 04:01:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.363 04:01:56 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:55.363 04:01:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.363 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:05:55.363 [2024-11-26 04:01:57.019157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.363 [2024-11-26 04:01:57.019273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70172 ] 00:05:55.621 [2024-11-26 04:01:57.161825] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:55.621 [2024-11-26 04:01:57.162045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.621 [2024-11-26 04:01:57.191990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.621 [2024-11-26 04:01:57.192414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.621 [2024-11-26 04:01:57.192496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.621 [2024-11-26 04:01:57.192594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.186 04:01:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.186 04:01:57 -- common/autotest_common.sh@862 -- # return 0 00:05:56.186 04:01:57 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:56.186 04:01:57 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70190 00:05:56.186 04:01:57 -- event/cpu_locks.sh@153 -- # waitforlisten 70190 /var/tmp/spdk2.sock 00:05:56.186 04:01:57 -- common/autotest_common.sh@829 -- # '[' -z 70190 ']' 00:05:56.186 04:01:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.186 04:01:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.186 04:01:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:56.186 04:01:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.186 04:01:57 -- common/autotest_common.sh@10 -- # set +x 00:05:56.186 [2024-11-26 04:01:57.804421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.186 [2024-11-26 04:01:57.804691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70190 ] 00:05:56.444 [2024-11-26 04:01:57.956234] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:56.444 [2024-11-26 04:01:57.956294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.444 [2024-11-26 04:01:58.020288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.444 [2024-11-26 04:01:58.020651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.444 [2024-11-26 04:01:58.020667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.444 [2024-11-26 04:01:58.020738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:57.010 04:01:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.010 04:01:58 -- common/autotest_common.sh@862 -- # return 0 00:05:57.010 04:01:58 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.010 04:01:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.010 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.010 04:01:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.010 04:01:58 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.010 04:01:58 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.010 04:01:58 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.010 04:01:58 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:57.010 04:01:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.010 04:01:58 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:57.010 04:01:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.010 04:01:58 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.010 04:01:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.010 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.010 [2024-11-26 04:01:58.679679] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70172 has claimed it. 
00:05:57.010 request: 00:05:57.010 { 00:05:57.010 "method": "framework_enable_cpumask_locks", 00:05:57.010 "req_id": 1 00:05:57.010 } 00:05:57.010 Got JSON-RPC error response 00:05:57.010 response: 00:05:57.010 { 00:05:57.010 "code": -32603, 00:05:57.010 "message": "Failed to claim CPU core: 2" 00:05:57.010 } 00:05:57.010 04:01:58 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:57.010 04:01:58 -- common/autotest_common.sh@653 -- # es=1 00:05:57.010 04:01:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.010 04:01:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.010 04:01:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.010 04:01:58 -- event/cpu_locks.sh@158 -- # waitforlisten 70172 /var/tmp/spdk.sock 00:05:57.010 04:01:58 -- common/autotest_common.sh@829 -- # '[' -z 70172 ']' 00:05:57.010 04:01:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.010 04:01:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.010 04:01:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.010 04:01:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.010 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.268 04:01:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.268 04:01:58 -- common/autotest_common.sh@862 -- # return 0 00:05:57.268 04:01:58 -- event/cpu_locks.sh@159 -- # waitforlisten 70190 /var/tmp/spdk2.sock 00:05:57.268 04:01:58 -- common/autotest_common.sh@829 -- # '[' -z 70190 ']' 00:05:57.268 04:01:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.268 04:01:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.268 04:01:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
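The request/response pair above is the raw JSON-RPC view of the same core-claiming failure: framework_enable_cpumask_locks returns -32603 because pid 70172 still holds the core-2 lock file. rpc_cmd in the trace wraps SPDK's scripts/rpc.py, so reproducing the call by hand would look roughly like this (socket path taken from the log; the exact wrapper behavior is an assumption):

# Ask the second target (on spdk2.sock) to claim its cores while the
# first target still holds the core-2 lock -- expected to fail:
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks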
00:05:57.268 04:01:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.268 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.527 ************************************ 00:05:57.528 END TEST locking_overlapped_coremask_via_rpc 00:05:57.528 ************************************ 00:05:57.528 04:01:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.528 04:01:59 -- common/autotest_common.sh@862 -- # return 0 00:05:57.528 04:01:59 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:57.528 04:01:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.528 04:01:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.528 04:01:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.528 00:05:57.528 real 0m2.126s 00:05:57.528 user 0m0.948s 00:05:57.528 sys 0m0.112s 00:05:57.528 04:01:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.528 04:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:57.528 04:01:59 -- event/cpu_locks.sh@174 -- # cleanup 00:05:57.528 04:01:59 -- event/cpu_locks.sh@15 -- # [[ -z 70172 ]] 00:05:57.528 04:01:59 -- event/cpu_locks.sh@15 -- # killprocess 70172 00:05:57.528 04:01:59 -- common/autotest_common.sh@936 -- # '[' -z 70172 ']' 00:05:57.528 04:01:59 -- common/autotest_common.sh@940 -- # kill -0 70172 00:05:57.528 04:01:59 -- common/autotest_common.sh@941 -- # uname 00:05:57.528 04:01:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.528 04:01:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70172 00:05:57.528 killing process with pid 70172 00:05:57.528 04:01:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.528 04:01:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.528 04:01:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70172' 00:05:57.528 04:01:59 -- common/autotest_common.sh@955 -- # kill 70172 00:05:57.528 04:01:59 -- common/autotest_common.sh@960 -- # wait 70172 00:05:57.787 04:01:59 -- event/cpu_locks.sh@16 -- # [[ -z 70190 ]] 00:05:57.787 04:01:59 -- event/cpu_locks.sh@16 -- # killprocess 70190 00:05:57.787 04:01:59 -- common/autotest_common.sh@936 -- # '[' -z 70190 ']' 00:05:57.787 04:01:59 -- common/autotest_common.sh@940 -- # kill -0 70190 00:05:57.787 04:01:59 -- common/autotest_common.sh@941 -- # uname 00:05:57.787 04:01:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.787 04:01:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70190 00:05:57.787 killing process with pid 70190 00:05:57.787 04:01:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:57.787 04:01:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:57.787 04:01:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70190' 00:05:57.787 04:01:59 -- common/autotest_common.sh@955 -- # kill 70190 00:05:57.787 04:01:59 -- common/autotest_common.sh@960 -- # wait 70190 00:05:58.044 04:01:59 -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.044 Process with pid 70172 is not found 00:05:58.044 04:01:59 -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.044 04:01:59 -- event/cpu_locks.sh@15 -- # [[ -z 70172 ]] 00:05:58.044 04:01:59 -- event/cpu_locks.sh@15 -- # 
killprocess 70172 00:05:58.044 04:01:59 -- common/autotest_common.sh@936 -- # '[' -z 70172 ']' 00:05:58.044 04:01:59 -- common/autotest_common.sh@940 -- # kill -0 70172 00:05:58.044 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70172) - No such process 00:05:58.044 04:01:59 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70172 is not found' 00:05:58.044 04:01:59 -- event/cpu_locks.sh@16 -- # [[ -z 70190 ]] 00:05:58.044 04:01:59 -- event/cpu_locks.sh@16 -- # killprocess 70190 00:05:58.044 04:01:59 -- common/autotest_common.sh@936 -- # '[' -z 70190 ']' 00:05:58.044 04:01:59 -- common/autotest_common.sh@940 -- # kill -0 70190 00:05:58.044 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70190) - No such process 00:05:58.044 Process with pid 70190 is not found 00:05:58.044 04:01:59 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70190 is not found' 00:05:58.044 04:01:59 -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.044 ************************************ 00:05:58.044 END TEST cpu_locks 00:05:58.044 ************************************ 00:05:58.044 00:05:58.044 real 0m15.209s 00:05:58.044 user 0m27.095s 00:05:58.044 sys 0m3.885s 00:05:58.044 04:01:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.044 04:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.044 ************************************ 00:05:58.044 END TEST event 00:05:58.044 ************************************ 00:05:58.044 00:05:58.044 real 0m39.546s 00:05:58.044 user 1m16.155s 00:05:58.044 sys 0m6.838s 00:05:58.044 04:01:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.044 04:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.044 04:01:59 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.044 04:01:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.044 04:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.044 04:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.044 ************************************ 00:05:58.044 START TEST thread 00:05:58.044 ************************************ 00:05:58.044 04:01:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.302 * Looking for test storage... 
00:05:58.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:58.302 04:01:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.302 04:01:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.302 04:01:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.302 04:01:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.302 04:01:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.302 04:01:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.302 04:01:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.302 04:01:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.302 04:01:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.302 04:01:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.302 04:01:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.302 04:01:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.302 04:01:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.302 04:01:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.302 04:01:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.302 04:01:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.302 04:01:59 -- scripts/common.sh@344 -- # : 1 00:05:58.302 04:01:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.302 04:01:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.302 04:01:59 -- scripts/common.sh@364 -- # decimal 1 00:05:58.302 04:01:59 -- scripts/common.sh@352 -- # local d=1 00:05:58.302 04:01:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.302 04:01:59 -- scripts/common.sh@354 -- # echo 1 00:05:58.302 04:01:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.302 04:01:59 -- scripts/common.sh@365 -- # decimal 2 00:05:58.302 04:01:59 -- scripts/common.sh@352 -- # local d=2 00:05:58.302 04:01:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.302 04:01:59 -- scripts/common.sh@354 -- # echo 2 00:05:58.302 04:01:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.302 04:01:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.302 04:01:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.302 04:01:59 -- scripts/common.sh@367 -- # return 0 00:05:58.302 04:01:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.302 04:01:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.302 --rc genhtml_branch_coverage=1 00:05:58.302 --rc genhtml_function_coverage=1 00:05:58.302 --rc genhtml_legend=1 00:05:58.302 --rc geninfo_all_blocks=1 00:05:58.302 --rc geninfo_unexecuted_blocks=1 00:05:58.302 00:05:58.302 ' 00:05:58.302 04:01:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.302 --rc genhtml_branch_coverage=1 00:05:58.302 --rc genhtml_function_coverage=1 00:05:58.302 --rc genhtml_legend=1 00:05:58.302 --rc geninfo_all_blocks=1 00:05:58.302 --rc geninfo_unexecuted_blocks=1 00:05:58.302 00:05:58.302 ' 00:05:58.302 04:01:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.302 --rc genhtml_branch_coverage=1 00:05:58.302 --rc genhtml_function_coverage=1 00:05:58.302 --rc genhtml_legend=1 00:05:58.302 --rc geninfo_all_blocks=1 00:05:58.302 --rc geninfo_unexecuted_blocks=1 00:05:58.302 00:05:58.302 ' 00:05:58.302 04:01:59 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.302 --rc genhtml_branch_coverage=1 00:05:58.302 --rc genhtml_function_coverage=1 00:05:58.302 --rc genhtml_legend=1 00:05:58.302 --rc geninfo_all_blocks=1 00:05:58.302 --rc geninfo_unexecuted_blocks=1 00:05:58.302 00:05:58.302 ' 00:05:58.302 04:01:59 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.302 04:01:59 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:58.302 04:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.302 04:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.302 ************************************ 00:05:58.302 START TEST thread_poller_perf 00:05:58.302 ************************************ 00:05:58.302 04:01:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.302 [2024-11-26 04:01:59.934691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.302 [2024-11-26 04:01:59.934899] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70316 ] 00:05:58.560 [2024-11-26 04:02:00.076197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.560 [2024-11-26 04:02:00.118559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.560 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:59.494 [2024-11-26T04:02:01.262Z] ====================================== 00:05:59.494 [2024-11-26T04:02:01.262Z] busy:2618328924 (cyc) 00:05:59.494 [2024-11-26T04:02:01.262Z] total_run_count: 294000 00:05:59.494 [2024-11-26T04:02:01.262Z] tsc_hz: 2600000000 (cyc) 00:05:59.494 [2024-11-26T04:02:01.262Z] ====================================== 00:05:59.494 [2024-11-26T04:02:01.262Z] poller_cost: 8905 (cyc), 3425 (nsec) 00:05:59.494 00:05:59.494 real 0m1.298s 00:05:59.494 user 0m1.113s 00:05:59.494 sys 0m0.076s 00:05:59.494 04:02:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.494 ************************************ 00:05:59.494 END TEST thread_poller_perf 00:05:59.494 ************************************ 00:05:59.494 04:02:01 -- common/autotest_common.sh@10 -- # set +x 00:05:59.494 04:02:01 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.494 04:02:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:59.494 04:02:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.494 04:02:01 -- common/autotest_common.sh@10 -- # set +x 00:05:59.494 ************************************ 00:05:59.494 START TEST thread_poller_perf 00:05:59.494 ************************************ 00:05:59.494 04:02:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.752 [2024-11-26 04:02:01.273807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:59.752 [2024-11-26 04:02:01.274208] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70347 ] 00:05:59.752 [2024-11-26 04:02:01.422655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.752 [2024-11-26 04:02:01.464793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.752 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:01.128 [2024-11-26T04:02:02.896Z] ====================================== 00:06:01.128 [2024-11-26T04:02:02.896Z] busy:2604360974 (cyc) 00:06:01.128 [2024-11-26T04:02:02.896Z] total_run_count: 3929000 00:06:01.128 [2024-11-26T04:02:02.896Z] tsc_hz: 2600000000 (cyc) 00:06:01.128 [2024-11-26T04:02:02.896Z] ====================================== 00:06:01.128 [2024-11-26T04:02:02.896Z] poller_cost: 662 (cyc), 254 (nsec) 00:06:01.128 ************************************ 00:06:01.128 END TEST thread_poller_perf 00:06:01.128 ************************************ 00:06:01.128 00:06:01.128 real 0m1.292s 00:06:01.128 user 0m1.112s 00:06:01.128 sys 0m0.072s 00:06:01.128 04:02:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.128 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.128 04:02:02 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.128 ************************************ 00:06:01.128 END TEST thread 00:06:01.128 ************************************ 00:06:01.128 00:06:01.128 real 0m2.803s 00:06:01.128 user 0m2.333s 00:06:01.128 sys 0m0.254s 00:06:01.128 04:02:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.128 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.128 04:02:02 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:01.128 04:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.128 04:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.128 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.128 ************************************ 00:06:01.128 START TEST accel 00:06:01.128 ************************************ 00:06:01.128 04:02:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:01.128 * Looking for test storage... 
00:06:01.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:01.128 04:02:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.128 04:02:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.128 04:02:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.128 04:02:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.128 04:02:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.128 04:02:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.128 04:02:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.128 04:02:02 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.128 04:02:02 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.128 04:02:02 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.128 04:02:02 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.128 04:02:02 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.128 04:02:02 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.128 04:02:02 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.128 04:02:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.128 04:02:02 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.128 04:02:02 -- scripts/common.sh@344 -- # : 1 00:06:01.128 04:02:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.128 04:02:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.128 04:02:02 -- scripts/common.sh@364 -- # decimal 1 00:06:01.128 04:02:02 -- scripts/common.sh@352 -- # local d=1 00:06:01.129 04:02:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.129 04:02:02 -- scripts/common.sh@354 -- # echo 1 00:06:01.129 04:02:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.129 04:02:02 -- scripts/common.sh@365 -- # decimal 2 00:06:01.129 04:02:02 -- scripts/common.sh@352 -- # local d=2 00:06:01.129 04:02:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.129 04:02:02 -- scripts/common.sh@354 -- # echo 2 00:06:01.129 04:02:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.129 04:02:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.129 04:02:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.129 04:02:02 -- scripts/common.sh@367 -- # return 0 00:06:01.129 04:02:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.129 04:02:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.129 --rc genhtml_branch_coverage=1 00:06:01.129 --rc genhtml_function_coverage=1 00:06:01.129 --rc genhtml_legend=1 00:06:01.129 --rc geninfo_all_blocks=1 00:06:01.129 --rc geninfo_unexecuted_blocks=1 00:06:01.129 00:06:01.129 ' 00:06:01.129 04:02:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.129 --rc genhtml_branch_coverage=1 00:06:01.129 --rc genhtml_function_coverage=1 00:06:01.129 --rc genhtml_legend=1 00:06:01.129 --rc geninfo_all_blocks=1 00:06:01.129 --rc geninfo_unexecuted_blocks=1 00:06:01.129 00:06:01.129 ' 00:06:01.129 04:02:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.129 --rc genhtml_branch_coverage=1 00:06:01.129 --rc genhtml_function_coverage=1 00:06:01.129 --rc genhtml_legend=1 00:06:01.129 --rc geninfo_all_blocks=1 00:06:01.129 --rc geninfo_unexecuted_blocks=1 00:06:01.129 00:06:01.129 ' 00:06:01.129 04:02:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.129 --rc genhtml_branch_coverage=1 00:06:01.129 --rc genhtml_function_coverage=1 00:06:01.129 --rc genhtml_legend=1 00:06:01.129 --rc geninfo_all_blocks=1 00:06:01.129 --rc geninfo_unexecuted_blocks=1 00:06:01.129 00:06:01.129 ' 00:06:01.129 04:02:02 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:01.129 04:02:02 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:01.129 04:02:02 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.129 04:02:02 -- accel/accel.sh@59 -- # spdk_tgt_pid=70430 00:06:01.129 04:02:02 -- accel/accel.sh@60 -- # waitforlisten 70430 00:06:01.129 04:02:02 -- common/autotest_common.sh@829 -- # '[' -z 70430 ']' 00:06:01.129 04:02:02 -- accel/accel.sh@58 -- # build_accel_config 00:06:01.129 04:02:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.129 04:02:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.129 04:02:02 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:01.129 04:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.129 04:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.129 04:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.129 04:02:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.129 04:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.129 04:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.129 04:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.129 04:02:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.129 04:02:02 -- accel/accel.sh@42 -- # jq -r . 00:06:01.129 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:01.129 [2024-11-26 04:02:02.816652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.129 [2024-11-26 04:02:02.816920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70430 ] 00:06:01.388 [2024-11-26 04:02:02.963765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.388 [2024-11-26 04:02:03.005474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.388 [2024-11-26 04:02:03.005705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.954 04:02:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.954 04:02:03 -- common/autotest_common.sh@862 -- # return 0 00:06:01.954 04:02:03 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:01.954 04:02:03 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:01.954 04:02:03 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:01.954 04:02:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.954 04:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:01.954 04:02:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.954 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.954 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.954 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.955 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.955 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.955 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.955 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.955 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.955 
04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.955 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.955 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.955 04:02:03 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # IFS== 00:06:01.955 04:02:03 -- accel/accel.sh@64 -- # read -r opc module 00:06:01.955 04:02:03 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:01.955 04:02:03 -- accel/accel.sh@67 -- # killprocess 70430 00:06:01.955 04:02:03 -- common/autotest_common.sh@936 -- # '[' -z 70430 ']' 00:06:01.955 04:02:03 -- common/autotest_common.sh@940 -- # kill -0 70430 00:06:01.955 04:02:03 -- common/autotest_common.sh@941 -- # uname 00:06:01.955 04:02:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.955 04:02:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70430 00:06:01.955 04:02:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.955 killing process with pid 70430 00:06:01.955 04:02:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.955 04:02:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70430' 00:06:01.955 04:02:03 -- common/autotest_common.sh@955 -- # kill 70430 00:06:01.955 04:02:03 -- common/autotest_common.sh@960 -- # wait 70430 00:06:02.213 04:02:03 -- accel/accel.sh@68 -- # trap - ERR 00:06:02.213 04:02:03 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:02.213 04:02:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:02.213 04:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.213 04:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:02.213 04:02:03 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:02.213 04:02:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.213 04:02:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:02.213 04:02:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.213 04:02:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.213 04:02:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.213 04:02:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.213 04:02:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.213 04:02:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.213 04:02:03 -- accel/accel.sh@42 -- # jq -r . 
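The opcode table filled in above comes straight out of the jq filter at accel.sh@62: accel_get_opc_assignments returns a JSON object mapping each opcode to its module, the filter . | to_entries | map("\(.key)=\(.value)") | .[] flattens it to one opc=module line per entry, and the IFS== read -r opc module loop records each entry as expected_opcs[$opc]=software. A sketch of that mechanism with a made-up two-entry payload standing in for the RPC reply:

# Sketch only: the JSON below is illustrative, not a real accel_get_opc_assignments reply.
declare -A expected_opcs
while IFS== read -r opc module; do
    expected_opcs["$opc"]=$module       # e.g. expected_opcs[copy]=software
done < <(printf '%s' '{"copy": "software", "crc32c": "software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')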
00:06:02.472 04:02:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.472 04:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:02.472 04:02:04 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:02.472 04:02:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:02.472 04:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.472 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:02.472 ************************************ 00:06:02.472 START TEST accel_missing_filename 00:06:02.472 ************************************ 00:06:02.472 04:02:04 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:02.472 04:02:04 -- common/autotest_common.sh@650 -- # local es=0 00:06:02.472 04:02:04 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:02.472 04:02:04 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:02.472 04:02:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.472 04:02:04 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:02.472 04:02:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.472 04:02:04 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:02.472 04:02:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:02.472 04:02:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.472 04:02:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.472 04:02:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.472 04:02:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.472 04:02:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.472 04:02:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.472 04:02:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.472 04:02:04 -- accel/accel.sh@42 -- # jq -r . 00:06:02.472 [2024-11-26 04:02:04.052043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.472 [2024-11-26 04:02:04.052168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70483 ] 00:06:02.472 [2024-11-26 04:02:04.201422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.472 [2024-11-26 04:02:04.231836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.731 [2024-11-26 04:02:04.265255] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.731 [2024-11-26 04:02:04.309713] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:02.731 A filename is required. 
00:06:02.731 04:02:04 -- common/autotest_common.sh@653 -- # es=234 00:06:02.731 04:02:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.731 04:02:04 -- common/autotest_common.sh@662 -- # es=106 00:06:02.731 04:02:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.731 04:02:04 -- common/autotest_common.sh@670 -- # es=1 00:06:02.731 04:02:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.731 00:06:02.731 real 0m0.358s 00:06:02.731 user 0m0.175s 00:06:02.731 sys 0m0.110s 00:06:02.731 04:02:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.731 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:02.731 ************************************ 00:06:02.731 END TEST accel_missing_filename 00:06:02.731 ************************************ 00:06:02.731 04:02:04 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.731 04:02:04 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:02.731 04:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.731 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:02.731 ************************************ 00:06:02.731 START TEST accel_compress_verify 00:06:02.731 ************************************ 00:06:02.731 04:02:04 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.731 04:02:04 -- common/autotest_common.sh@650 -- # local es=0 00:06:02.731 04:02:04 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.731 04:02:04 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:02.731 04:02:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.731 04:02:04 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:02.731 04:02:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.731 04:02:04 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.731 04:02:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:02.731 04:02:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.731 04:02:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.731 04:02:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.731 04:02:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.731 04:02:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.731 04:02:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.731 04:02:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.731 04:02:04 -- accel/accel.sh@42 -- # jq -r . 00:06:02.731 [2024-11-26 04:02:04.449860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.731 [2024-11-26 04:02:04.449978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70503 ] 00:06:03.006 [2024-11-26 04:02:04.596865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.006 [2024-11-26 04:02:04.629357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.006 [2024-11-26 04:02:04.663381] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.007 [2024-11-26 04:02:04.708221] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:03.297 00:06:03.297 Compression does not support the verify option, aborting. 00:06:03.297 04:02:04 -- common/autotest_common.sh@653 -- # es=161 00:06:03.297 04:02:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.297 04:02:04 -- common/autotest_common.sh@662 -- # es=33 00:06:03.297 04:02:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:03.297 04:02:04 -- common/autotest_common.sh@670 -- # es=1 00:06:03.297 04:02:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.297 00:06:03.297 real 0m0.361s 00:06:03.297 user 0m0.175s 00:06:03.297 sys 0m0.111s 00:06:03.297 04:02:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.297 ************************************ 00:06:03.297 END TEST accel_compress_verify 00:06:03.297 ************************************ 00:06:03.297 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.297 04:02:04 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:03.297 04:02:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:03.297 04:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.297 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.297 ************************************ 00:06:03.297 START TEST accel_wrong_workload 00:06:03.297 ************************************ 00:06:03.297 04:02:04 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:03.297 04:02:04 -- common/autotest_common.sh@650 -- # local es=0 00:06:03.297 04:02:04 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:03.297 04:02:04 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:03.297 04:02:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.297 04:02:04 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:03.297 04:02:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.297 04:02:04 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:03.297 04:02:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:03.297 04:02:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.297 04:02:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.297 04:02:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.297 04:02:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.297 04:02:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.297 04:02:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.297 04:02:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.297 04:02:04 -- accel/accel.sh@42 -- # jq -r . 
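Both negative tests above go through the exit-status normalization that the es= trace lines show: a raw status above 128 (234 for the missing-filename run, 161 for compress-verify) means the process died on a signal, so 128 is subtracted (giving 106 and 33), and any remaining failure is then collapsed to es=1 before the NOT wrapper checks that the status is non-zero. A sketch of that pattern (the function name expect_failure is hypothetical; the real logic lives in autotest_common.sh):

# Sketch only: the es= normalization traced above, as a standalone helper.
expect_failure() {
    local es=0
    "$@" || es=$?                         # run the command that should fail
    (( es == 0 )) && return 1             # unexpected success
    (( es > 128 )) && es=$(( es - 128 ))  # 234 -> 106, 161 -> 33: strip the signal offset
    es=1                                  # collapse to a plain failure, like the case "$es" step
    return 0
}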
00:06:03.297 Unsupported workload type: foobar 00:06:03.297 [2024-11-26 04:02:04.846336] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:03.297 accel_perf options: 00:06:03.297 [-h help message] 00:06:03.297 [-q queue depth per core] 00:06:03.297 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:03.297 [-T number of threads per core 00:06:03.297 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:03.297 [-t time in seconds] 00:06:03.297 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:03.297 [ dif_verify, , dif_generate, dif_generate_copy 00:06:03.297 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:03.297 [-l for compress/decompress workloads, name of uncompressed input file 00:06:03.297 [-S for crc32c workload, use this seed value (default 0) 00:06:03.297 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:03.297 [-f for fill workload, use this BYTE value (default 255) 00:06:03.297 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:03.297 [-y verify result if this switch is on] 00:06:03.297 [-a tasks to allocate per core (default: same value as -q)] 00:06:03.297 Can be used to spread operations across a wider range of memory. 00:06:03.297 04:02:04 -- common/autotest_common.sh@653 -- # es=1 00:06:03.297 04:02:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.297 04:02:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.297 04:02:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.297 00:06:03.297 real 0m0.048s 00:06:03.297 user 0m0.052s 00:06:03.297 sys 0m0.025s 00:06:03.297 04:02:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.297 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.297 ************************************ 00:06:03.297 END TEST accel_wrong_workload 00:06:03.297 ************************************ 00:06:03.297 04:02:04 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:03.297 04:02:04 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:03.297 04:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.297 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.297 ************************************ 00:06:03.297 START TEST accel_negative_buffers 00:06:03.297 ************************************ 00:06:03.297 04:02:04 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:03.297 04:02:04 -- common/autotest_common.sh@650 -- # local es=0 00:06:03.297 04:02:04 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:03.297 04:02:04 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:03.297 04:02:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.297 04:02:04 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:03.297 04:02:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.297 04:02:04 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:03.297 04:02:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:03.297 04:02:04 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:03.297 04:02:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.297 04:02:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.297 04:02:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.297 04:02:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.297 04:02:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.297 04:02:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.297 04:02:04 -- accel/accel.sh@42 -- # jq -r . 00:06:03.297 -x option must be non-negative. 00:06:03.297 [2024-11-26 04:02:04.932923] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:03.297 accel_perf options: 00:06:03.297 [-h help message] 00:06:03.297 [-q queue depth per core] 00:06:03.297 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:03.297 [-T number of threads per core 00:06:03.298 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:03.298 [-t time in seconds] 00:06:03.298 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:03.298 [ dif_verify, , dif_generate, dif_generate_copy 00:06:03.298 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:03.298 [-l for compress/decompress workloads, name of uncompressed input file 00:06:03.298 [-S for crc32c workload, use this seed value (default 0) 00:06:03.298 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:03.298 [-f for fill workload, use this BYTE value (default 255) 00:06:03.298 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:03.298 [-y verify result if this switch is on] 00:06:03.298 [-a tasks to allocate per core (default: same value as -q)] 00:06:03.298 Can be used to spread operations across a wider range of memory. 
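The two usage dumps above are printed because the workload arguments are deliberately invalid: -w foobar is rejected as an unsupported workload type and -x -1 trips the "-x option must be non-negative" check, so each accel_perf run exits non-zero and satisfies its NOT wrapper. A sketch reproducing both by hand, minus the /dev/fd/62 config plumbing the harness adds:

# Sketch only: both invocations are expected to fail with the usage text above.
perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
"$perf" -t 1 -w foobar;       echo "unsupported workload exit: $?"
"$perf" -t 1 -w xor -y -x -1; echo "negative -x exit: $?"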
00:06:03.298 04:02:04 -- common/autotest_common.sh@653 -- # es=1 00:06:03.298 04:02:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.298 04:02:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.298 04:02:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.298 00:06:03.298 real 0m0.046s 00:06:03.298 user 0m0.047s 00:06:03.298 sys 0m0.027s 00:06:03.298 04:02:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.298 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.298 ************************************ 00:06:03.298 END TEST accel_negative_buffers 00:06:03.298 ************************************ 00:06:03.298 04:02:04 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:03.298 04:02:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:03.298 04:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.298 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:03.298 ************************************ 00:06:03.298 START TEST accel_crc32c 00:06:03.298 ************************************ 00:06:03.298 04:02:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:03.298 04:02:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.298 04:02:04 -- accel/accel.sh@17 -- # local accel_module 00:06:03.298 04:02:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:03.298 04:02:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.298 04:02:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:03.298 04:02:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.298 04:02:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.298 04:02:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.298 04:02:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.298 04:02:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.298 04:02:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.298 04:02:04 -- accel/accel.sh@42 -- # jq -r . 00:06:03.298 [2024-11-26 04:02:05.013060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.298 [2024-11-26 04:02:05.013178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70570 ] 00:06:03.557 [2024-11-26 04:02:05.161863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.557 [2024-11-26 04:02:05.193793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.933 04:02:06 -- accel/accel.sh@18 -- # out=' 00:06:04.933 SPDK Configuration: 00:06:04.933 Core mask: 0x1 00:06:04.933 00:06:04.933 Accel Perf Configuration: 00:06:04.933 Workload Type: crc32c 00:06:04.933 CRC-32C seed: 32 00:06:04.933 Transfer size: 4096 bytes 00:06:04.933 Vector count 1 00:06:04.933 Module: software 00:06:04.933 Queue depth: 32 00:06:04.933 Allocate depth: 32 00:06:04.933 # threads/core: 1 00:06:04.933 Run time: 1 seconds 00:06:04.933 Verify: Yes 00:06:04.933 00:06:04.933 Running for 1 seconds... 
00:06:04.933 00:06:04.933 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.933 ------------------------------------------------------------------------------------ 00:06:04.933 0,0 451968/s 1765 MiB/s 0 0 00:06:04.933 ==================================================================================== 00:06:04.933 Total 451968/s 1765 MiB/s 0 0' 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.933 04:02:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:04.933 04:02:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:04.933 04:02:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.933 04:02:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.933 04:02:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.933 04:02:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.933 04:02:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.933 04:02:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.933 04:02:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.933 04:02:06 -- accel/accel.sh@42 -- # jq -r . 00:06:04.933 [2024-11-26 04:02:06.377114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.933 [2024-11-26 04:02:06.377232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:06:04.933 [2024-11-26 04:02:06.523008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.933 [2024-11-26 04:02:06.555801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.933 04:02:06 -- accel/accel.sh@21 -- # val= 00:06:04.933 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.933 04:02:06 -- accel/accel.sh@21 -- # val= 00:06:04.933 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.933 04:02:06 -- accel/accel.sh@21 -- # val=0x1 00:06:04.933 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.933 04:02:06 -- accel/accel.sh@21 -- # val= 00:06:04.933 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.933 04:02:06 -- accel/accel.sh@21 -- # val= 00:06:04.933 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.933 04:02:06 -- accel/accel.sh@21 -- # val=crc32c 00:06:04.933 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.933 04:02:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.933 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val=32 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val= 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val=software 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val=32 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val=32 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val=1 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val=Yes 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val= 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:04.934 04:02:06 -- accel/accel.sh@21 -- # val= 00:06:04.934 04:02:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # IFS=: 00:06:04.934 04:02:06 -- accel/accel.sh@20 -- # read -r var val 00:06:06.310 04:02:07 -- accel/accel.sh@21 -- # val= 00:06:06.310 04:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.310 04:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:06.310 04:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:06.310 04:02:07 -- accel/accel.sh@21 -- # val= 00:06:06.310 04:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.310 04:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:06.310 04:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:06.310 04:02:07 -- accel/accel.sh@21 -- # val= 00:06:06.310 04:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.310 04:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:06.310 04:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:06.310 04:02:07 -- accel/accel.sh@21 -- # val= 00:06:06.311 04:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.311 04:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:06.311 04:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:06.311 04:02:07 -- accel/accel.sh@21 -- # val= 00:06:06.311 04:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.311 04:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:06.311 04:02:07 -- 
accel/accel.sh@20 -- # read -r var val 00:06:06.311 04:02:07 -- accel/accel.sh@21 -- # val= 00:06:06.311 04:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.311 04:02:07 -- accel/accel.sh@20 -- # IFS=: 00:06:06.311 04:02:07 -- accel/accel.sh@20 -- # read -r var val 00:06:06.311 04:02:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.311 04:02:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:06.311 04:02:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.311 00:06:06.311 real 0m2.720s 00:06:06.311 user 0m2.291s 00:06:06.311 sys 0m0.228s 00:06:06.311 ************************************ 00:06:06.311 04:02:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.311 04:02:07 -- common/autotest_common.sh@10 -- # set +x 00:06:06.311 END TEST accel_crc32c 00:06:06.311 ************************************ 00:06:06.311 04:02:07 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:06.311 04:02:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:06.311 04:02:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.311 04:02:07 -- common/autotest_common.sh@10 -- # set +x 00:06:06.311 ************************************ 00:06:06.311 START TEST accel_crc32c_C2 00:06:06.311 ************************************ 00:06:06.311 04:02:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:06.311 04:02:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.311 04:02:07 -- accel/accel.sh@17 -- # local accel_module 00:06:06.311 04:02:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:06.311 04:02:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:06.311 04:02:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.311 04:02:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.311 04:02:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.311 04:02:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.311 04:02:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.311 04:02:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.311 04:02:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.311 04:02:07 -- accel/accel.sh@42 -- # jq -r . 00:06:06.311 [2024-11-26 04:02:07.762150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.311 [2024-11-26 04:02:07.762254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70621 ] 00:06:06.311 [2024-11-26 04:02:07.910053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.311 [2024-11-26 04:02:07.947217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.687 04:02:09 -- accel/accel.sh@18 -- # out=' 00:06:07.687 SPDK Configuration: 00:06:07.687 Core mask: 0x1 00:06:07.687 00:06:07.687 Accel Perf Configuration: 00:06:07.687 Workload Type: crc32c 00:06:07.687 CRC-32C seed: 0 00:06:07.687 Transfer size: 4096 bytes 00:06:07.687 Vector count 2 00:06:07.687 Module: software 00:06:07.687 Queue depth: 32 00:06:07.687 Allocate depth: 32 00:06:07.687 # threads/core: 1 00:06:07.687 Run time: 1 seconds 00:06:07.687 Verify: Yes 00:06:07.687 00:06:07.687 Running for 1 seconds... 
00:06:07.687 00:06:07.687 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.687 ------------------------------------------------------------------------------------ 00:06:07.687 0,0 389632/s 3044 MiB/s 0 0 00:06:07.688 ==================================================================================== 00:06:07.688 Total 389632/s 3044 MiB/s 0 0' 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:07.688 04:02:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.688 04:02:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.688 04:02:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:07.688 04:02:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.688 04:02:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.688 04:02:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.688 04:02:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.688 04:02:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.688 04:02:09 -- accel/accel.sh@42 -- # jq -r . 00:06:07.688 [2024-11-26 04:02:09.117369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.688 [2024-11-26 04:02:09.117457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70640 ] 00:06:07.693 [2024-11-26 04:02:09.259844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.694 [2024-11-26 04:02:09.288978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.694 04:02:09 -- accel/accel.sh@21 -- # val= 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val= 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val=0x1 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val= 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val= 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val=crc32c 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val=0 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val= 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val=software 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val=32 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val=32 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val=1 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val=Yes 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val= 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:07.688 04:02:09 -- accel/accel.sh@21 -- # val= 00:06:07.688 04:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # IFS=: 00:06:07.688 04:02:09 -- accel/accel.sh@20 -- # read -r var val 00:06:09.063 04:02:10 -- accel/accel.sh@21 -- # val= 00:06:09.063 04:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.063 04:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:09.063 04:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:09.063 04:02:10 -- accel/accel.sh@21 -- # val= 00:06:09.063 04:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.063 04:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:09.063 04:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:09.063 04:02:10 -- accel/accel.sh@21 -- # val= 00:06:09.063 04:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.063 04:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:09.063 04:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:09.064 04:02:10 -- accel/accel.sh@21 -- # val= 00:06:09.064 04:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.064 04:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:09.064 04:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:09.064 04:02:10 -- accel/accel.sh@21 -- # val= 00:06:09.064 04:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.064 04:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:09.064 04:02:10 -- 
accel/accel.sh@20 -- # read -r var val 00:06:09.064 04:02:10 -- accel/accel.sh@21 -- # val= 00:06:09.064 04:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.064 04:02:10 -- accel/accel.sh@20 -- # IFS=: 00:06:09.064 04:02:10 -- accel/accel.sh@20 -- # read -r var val 00:06:09.064 04:02:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.064 04:02:10 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:09.064 04:02:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.064 00:06:09.064 real 0m2.680s 00:06:09.064 user 0m2.276s 00:06:09.064 sys 0m0.200s 00:06:09.064 04:02:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.064 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:09.064 ************************************ 00:06:09.064 END TEST accel_crc32c_C2 00:06:09.064 ************************************ 00:06:09.064 04:02:10 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:09.064 04:02:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:09.064 04:02:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.064 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:06:09.064 ************************************ 00:06:09.064 START TEST accel_copy 00:06:09.064 ************************************ 00:06:09.064 04:02:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:09.064 04:02:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.064 04:02:10 -- accel/accel.sh@17 -- # local accel_module 00:06:09.064 04:02:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:09.064 04:02:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.064 04:02:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.064 04:02:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:09.064 04:02:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.064 04:02:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.064 04:02:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.064 04:02:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.064 04:02:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.064 04:02:10 -- accel/accel.sh@42 -- # jq -r . 00:06:09.064 [2024-11-26 04:02:10.489058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.064 [2024-11-26 04:02:10.489176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70671 ] 00:06:09.064 [2024-11-26 04:02:10.635649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.064 [2024-11-26 04:02:10.665533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.441 04:02:11 -- accel/accel.sh@18 -- # out=' 00:06:10.441 SPDK Configuration: 00:06:10.441 Core mask: 0x1 00:06:10.441 00:06:10.441 Accel Perf Configuration: 00:06:10.441 Workload Type: copy 00:06:10.441 Transfer size: 4096 bytes 00:06:10.441 Vector count 1 00:06:10.441 Module: software 00:06:10.441 Queue depth: 32 00:06:10.441 Allocate depth: 32 00:06:10.441 # threads/core: 1 00:06:10.441 Run time: 1 seconds 00:06:10.441 Verify: Yes 00:06:10.441 00:06:10.441 Running for 1 seconds... 
00:06:10.441 00:06:10.441 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.441 ------------------------------------------------------------------------------------ 00:06:10.441 0,0 383872/s 1499 MiB/s 0 0 00:06:10.441 ==================================================================================== 00:06:10.441 Total 383872/s 1499 MiB/s 0 0' 00:06:10.441 04:02:11 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:11 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:10.441 04:02:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:10.441 04:02:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.441 04:02:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.441 04:02:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.441 04:02:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.441 04:02:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.441 04:02:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.441 04:02:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.441 04:02:11 -- accel/accel.sh@42 -- # jq -r . 00:06:10.441 [2024-11-26 04:02:11.826998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.441 [2024-11-26 04:02:11.827718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70692 ] 00:06:10.441 [2024-11-26 04:02:11.979052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.441 [2024-11-26 04:02:12.008692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val= 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val= 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val=0x1 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val= 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val= 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val=copy 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- 
accel/accel.sh@21 -- # val= 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val=software 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val=32 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val=32 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val=1 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val=Yes 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val= 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:10.441 04:02:12 -- accel/accel.sh@21 -- # val= 00:06:10.441 04:02:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # IFS=: 00:06:10.441 04:02:12 -- accel/accel.sh@20 -- # read -r var val 00:06:11.376 04:02:13 -- accel/accel.sh@21 -- # val= 00:06:11.376 04:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:11.376 04:02:13 -- accel/accel.sh@21 -- # val= 00:06:11.376 04:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:11.376 04:02:13 -- accel/accel.sh@21 -- # val= 00:06:11.376 04:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:11.376 04:02:13 -- accel/accel.sh@21 -- # val= 00:06:11.376 04:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:11.376 04:02:13 -- accel/accel.sh@21 -- # val= 00:06:11.376 04:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:11.376 04:02:13 -- accel/accel.sh@20 -- # read -r var val 00:06:11.647 04:02:13 -- accel/accel.sh@21 -- # val= 00:06:11.647 04:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.647 04:02:13 -- accel/accel.sh@20 -- # IFS=: 00:06:11.647 04:02:13 -- 
accel/accel.sh@20 -- # read -r var val 00:06:11.647 04:02:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.647 04:02:13 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:11.647 04:02:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.647 00:06:11.647 real 0m2.690s 00:06:11.647 user 0m2.288s 00:06:11.647 sys 0m0.202s 00:06:11.647 04:02:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.647 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:11.647 ************************************ 00:06:11.647 END TEST accel_copy 00:06:11.647 ************************************ 00:06:11.647 04:02:13 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.647 04:02:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:11.647 04:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.647 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:06:11.647 ************************************ 00:06:11.648 START TEST accel_fill 00:06:11.648 ************************************ 00:06:11.648 04:02:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.648 04:02:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.648 04:02:13 -- accel/accel.sh@17 -- # local accel_module 00:06:11.648 04:02:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.648 04:02:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:11.648 04:02:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.648 04:02:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.648 04:02:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.648 04:02:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.648 04:02:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.648 04:02:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.648 04:02:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.648 04:02:13 -- accel/accel.sh@42 -- # jq -r . 00:06:11.648 [2024-11-26 04:02:13.216685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.648 [2024-11-26 04:02:13.216936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70722 ] 00:06:11.648 [2024-11-26 04:02:13.356321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.648 [2024-11-26 04:02:13.386204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.028 04:02:14 -- accel/accel.sh@18 -- # out=' 00:06:13.028 SPDK Configuration: 00:06:13.028 Core mask: 0x1 00:06:13.028 00:06:13.028 Accel Perf Configuration: 00:06:13.028 Workload Type: fill 00:06:13.028 Fill pattern: 0x80 00:06:13.028 Transfer size: 4096 bytes 00:06:13.028 Vector count 1 00:06:13.028 Module: software 00:06:13.028 Queue depth: 64 00:06:13.028 Allocate depth: 64 00:06:13.028 # threads/core: 1 00:06:13.028 Run time: 1 seconds 00:06:13.028 Verify: Yes 00:06:13.028 00:06:13.028 Running for 1 seconds... 
00:06:13.028 00:06:13.028 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.028 ------------------------------------------------------------------------------------ 00:06:13.028 0,0 612032/s 2390 MiB/s 0 0 00:06:13.028 ==================================================================================== 00:06:13.028 Total 612032/s 2390 MiB/s 0 0' 00:06:13.028 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.028 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.028 04:02:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:13.028 04:02:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:13.028 04:02:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.029 04:02:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.029 04:02:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.029 04:02:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.029 04:02:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.029 04:02:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.029 04:02:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.029 04:02:14 -- accel/accel.sh@42 -- # jq -r . 00:06:13.029 [2024-11-26 04:02:14.556428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.029 [2024-11-26 04:02:14.556556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70742 ] 00:06:13.029 [2024-11-26 04:02:14.703591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.029 [2024-11-26 04:02:14.734315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val= 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val= 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val=0x1 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val= 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val= 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val=fill 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val=0x80 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 
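
Annotation: each accel workload in this log is exercised twice via accel.sh's accel_perf wrapper, and the command-line flags mirror the configuration dump printed before every run: -t is the run time in seconds, -w the workload type, -q the queue depth, -a the allocate depth, -y enables verification, and for fill, -f is the fill byte (128 decimal is the 0x80 pattern shown above). Assuming the same checkout and build as the trace, and letting the tool default to the software module instead of passing -c, the fill run can be approximated by hand:

  $ cd /home/vagrant/spdk_repo/spdk
  $ ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
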
00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val= 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val=software 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val=64 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val=64 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val=1 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val=Yes 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val= 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:13.029 04:02:14 -- accel/accel.sh@21 -- # val= 00:06:13.029 04:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # IFS=: 00:06:13.029 04:02:14 -- accel/accel.sh@20 -- # read -r var val 00:06:14.441 04:02:15 -- accel/accel.sh@21 -- # val= 00:06:14.441 04:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:14.441 04:02:15 -- accel/accel.sh@21 -- # val= 00:06:14.441 04:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:14.441 04:02:15 -- accel/accel.sh@21 -- # val= 00:06:14.441 04:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:14.441 04:02:15 -- accel/accel.sh@21 -- # val= 00:06:14.441 04:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:14.441 04:02:15 -- accel/accel.sh@21 -- # val= 00:06:14.441 04:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # IFS=: 
00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:14.441 04:02:15 -- accel/accel.sh@21 -- # val= 00:06:14.441 04:02:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # IFS=: 00:06:14.441 04:02:15 -- accel/accel.sh@20 -- # read -r var val 00:06:14.441 ************************************ 00:06:14.441 END TEST accel_fill 00:06:14.441 ************************************ 00:06:14.441 04:02:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.441 04:02:15 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:14.441 04:02:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.441 00:06:14.441 real 0m2.682s 00:06:14.441 user 0m2.271s 00:06:14.441 sys 0m0.210s 00:06:14.441 04:02:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.441 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:06:14.441 04:02:15 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:14.441 04:02:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.441 04:02:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.441 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:06:14.441 ************************************ 00:06:14.441 START TEST accel_copy_crc32c 00:06:14.441 ************************************ 00:06:14.441 04:02:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:14.441 04:02:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.441 04:02:15 -- accel/accel.sh@17 -- # local accel_module 00:06:14.441 04:02:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:14.441 04:02:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:14.441 04:02:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.441 04:02:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.441 04:02:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.441 04:02:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.441 04:02:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.441 04:02:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.441 04:02:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.441 04:02:15 -- accel/accel.sh@42 -- # jq -r . 00:06:14.441 [2024-11-26 04:02:15.934355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.441 [2024-11-26 04:02:15.934518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70778 ] 00:06:14.441 [2024-11-26 04:02:16.087413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.441 [2024-11-26 04:02:16.117915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.817 04:02:17 -- accel/accel.sh@18 -- # out=' 00:06:15.817 SPDK Configuration: 00:06:15.817 Core mask: 0x1 00:06:15.817 00:06:15.817 Accel Perf Configuration: 00:06:15.817 Workload Type: copy_crc32c 00:06:15.817 CRC-32C seed: 0 00:06:15.817 Vector size: 4096 bytes 00:06:15.817 Transfer size: 4096 bytes 00:06:15.817 Vector count 1 00:06:15.817 Module: software 00:06:15.817 Queue depth: 32 00:06:15.817 Allocate depth: 32 00:06:15.817 # threads/core: 1 00:06:15.817 Run time: 1 seconds 00:06:15.817 Verify: Yes 00:06:15.817 00:06:15.817 Running for 1 seconds... 
00:06:15.817 00:06:15.817 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.817 ------------------------------------------------------------------------------------ 00:06:15.817 0,0 308288/s 1204 MiB/s 0 0 00:06:15.817 ==================================================================================== 00:06:15.817 Total 308288/s 1204 MiB/s 0 0' 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:15.817 04:02:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.817 04:02:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:15.817 04:02:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.817 04:02:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.817 04:02:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.817 04:02:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.817 04:02:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.817 04:02:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.817 04:02:17 -- accel/accel.sh@42 -- # jq -r . 00:06:15.817 [2024-11-26 04:02:17.281893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.817 [2024-11-26 04:02:17.282037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70793 ] 00:06:15.817 [2024-11-26 04:02:17.436052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.817 [2024-11-26 04:02:17.466732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val= 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val= 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val=0x1 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val= 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val= 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val=0 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 
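
Annotation: the Bandwidth column in these tables is the Transfers figure multiplied by the configured transfer size, truncated to whole MiB/s. Checking the copy_crc32c row above, with awk used purely as a calculator:

  $ awk 'BEGIN { printf "%d MiB/s\n", 308288 * 4096 / 1048576 }'
  1204 MiB/s

The earlier tables check out the same way — 383872 x 4096 / 2^20 = 1499 MiB/s for copy and 612032 x 4096 / 2^20 = 2390 MiB/s for fill — and for the chained copy_crc32c -C 2 run below, the figure to plug in is its 8192-byte transfer size. The Failed and Miscompares columns stay at 0 in every run, as expected with -y verification enabled.
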
04:02:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val= 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val=software 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val=32 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val=32 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val=1 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val=Yes 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val= 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.817 04:02:17 -- accel/accel.sh@21 -- # val= 00:06:15.817 04:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.817 04:02:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.818 04:02:17 -- accel/accel.sh@20 -- # read -r var val 00:06:17.191 04:02:18 -- accel/accel.sh@21 -- # val= 00:06:17.191 04:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:17.191 04:02:18 -- accel/accel.sh@21 -- # val= 00:06:17.191 04:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:17.191 04:02:18 -- accel/accel.sh@21 -- # val= 00:06:17.191 04:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:17.191 04:02:18 -- accel/accel.sh@21 -- # val= 00:06:17.191 04:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # IFS=: 
00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:17.191 04:02:18 -- accel/accel.sh@21 -- # val= 00:06:17.191 04:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:17.191 04:02:18 -- accel/accel.sh@21 -- # val= 00:06:17.191 04:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # IFS=: 00:06:17.191 04:02:18 -- accel/accel.sh@20 -- # read -r var val 00:06:17.191 04:02:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.191 04:02:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:17.191 04:02:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.191 00:06:17.191 real 0m2.689s 00:06:17.191 user 0m2.282s 00:06:17.191 sys 0m0.205s 00:06:17.191 04:02:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.191 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.191 ************************************ 00:06:17.191 END TEST accel_copy_crc32c 00:06:17.191 ************************************ 00:06:17.191 04:02:18 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:17.191 04:02:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:17.191 04:02:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.191 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.191 ************************************ 00:06:17.191 START TEST accel_copy_crc32c_C2 00:06:17.191 ************************************ 00:06:17.191 04:02:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:17.191 04:02:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.191 04:02:18 -- accel/accel.sh@17 -- # local accel_module 00:06:17.191 04:02:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:17.191 04:02:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:17.191 04:02:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.191 04:02:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.191 04:02:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.191 04:02:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.191 04:02:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.191 04:02:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.191 04:02:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.191 04:02:18 -- accel/accel.sh@42 -- # jq -r . 00:06:17.191 [2024-11-26 04:02:18.653089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:17.191 [2024-11-26 04:02:18.653179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70828 ] 00:06:17.191 [2024-11-26 04:02:18.794913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.191 [2024-11-26 04:02:18.823634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.566 04:02:19 -- accel/accel.sh@18 -- # out=' 00:06:18.566 SPDK Configuration: 00:06:18.566 Core mask: 0x1 00:06:18.566 00:06:18.566 Accel Perf Configuration: 00:06:18.566 Workload Type: copy_crc32c 00:06:18.566 CRC-32C seed: 0 00:06:18.566 Vector size: 4096 bytes 00:06:18.566 Transfer size: 8192 bytes 00:06:18.566 Vector count 2 00:06:18.566 Module: software 00:06:18.566 Queue depth: 32 00:06:18.566 Allocate depth: 32 00:06:18.566 # threads/core: 1 00:06:18.566 Run time: 1 seconds 00:06:18.566 Verify: Yes 00:06:18.566 00:06:18.566 Running for 1 seconds... 00:06:18.566 00:06:18.566 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.566 ------------------------------------------------------------------------------------ 00:06:18.566 0,0 231008/s 1804 MiB/s 0 0 00:06:18.566 ==================================================================================== 00:06:18.566 Total 231008/s 1804 MiB/s 0 0' 00:06:18.566 04:02:19 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:19 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:18.566 04:02:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:18.566 04:02:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.566 04:02:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.566 04:02:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.566 04:02:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.566 04:02:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.566 04:02:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.566 04:02:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.566 04:02:19 -- accel/accel.sh@42 -- # jq -r . 00:06:18.566 [2024-11-26 04:02:19.983829] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:18.566 [2024-11-26 04:02:19.984129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70849 ] 00:06:18.566 [2024-11-26 04:02:20.131942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.566 [2024-11-26 04:02:20.162544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val= 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val= 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val=0x1 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val= 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val= 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.566 04:02:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val=0 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.566 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.566 04:02:20 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:18.566 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val= 00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val=software 00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val=32 00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val=32 
00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val=1 00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val=Yes 00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val= 00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.567 04:02:20 -- accel/accel.sh@21 -- # val= 00:06:18.567 04:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.567 04:02:20 -- accel/accel.sh@20 -- # read -r var val 00:06:19.943 04:02:21 -- accel/accel.sh@21 -- # val= 00:06:19.943 04:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # IFS=: 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # read -r var val 00:06:19.943 04:02:21 -- accel/accel.sh@21 -- # val= 00:06:19.943 04:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # IFS=: 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # read -r var val 00:06:19.943 04:02:21 -- accel/accel.sh@21 -- # val= 00:06:19.943 04:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # IFS=: 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # read -r var val 00:06:19.943 04:02:21 -- accel/accel.sh@21 -- # val= 00:06:19.943 04:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # IFS=: 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # read -r var val 00:06:19.943 04:02:21 -- accel/accel.sh@21 -- # val= 00:06:19.943 04:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # IFS=: 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # read -r var val 00:06:19.943 04:02:21 -- accel/accel.sh@21 -- # val= 00:06:19.943 04:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # IFS=: 00:06:19.943 04:02:21 -- accel/accel.sh@20 -- # read -r var val 00:06:19.943 04:02:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.943 04:02:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:19.943 04:02:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.943 00:06:19.943 real 0m2.673s 00:06:19.943 user 0m2.272s 00:06:19.943 sys 0m0.200s 00:06:19.943 04:02:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.943 ************************************ 00:06:19.943 END TEST accel_copy_crc32c_C2 00:06:19.943 ************************************ 00:06:19.943 04:02:21 -- common/autotest_common.sh@10 -- # set +x 00:06:19.943 04:02:21 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:19.943 04:02:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
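
Annotation: the wall of val= lines is bash xtrace from accel.sh, not extra test data. The script captures accel_perf's configuration dump and replays it line by line through a read loop — the values flashing past (val=copy_crc32c, val=software, val=32, val='1 seconds', ...) are exactly the dump fields, and the extracted workload type and module feed the [[ -n software ]] / [[ -n copy_crc32c ]] assertions at the end of each test. A self-contained sketch of the loop's shape; the real implementation lives in accel.sh and the key matching here is simplified:

  # replay "Key: value" lines through the IFS=: read loop seen in the trace
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=${val# } ;;   # simplified key matching
          *Module*) accel_module=${val# } ;;
          *) : ;;                                    # other fields handled alike
      esac
  done <<< $'Workload Type: copy_crc32c\nModule: software'
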
00:06:19.943 04:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.943 04:02:21 -- common/autotest_common.sh@10 -- # set +x 00:06:19.943 ************************************ 00:06:19.943 START TEST accel_dualcast 00:06:19.943 ************************************ 00:06:19.943 04:02:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:19.943 04:02:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.943 04:02:21 -- accel/accel.sh@17 -- # local accel_module 00:06:19.943 04:02:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:19.943 04:02:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.944 04:02:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.944 04:02:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:19.944 04:02:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.944 04:02:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.944 04:02:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.944 04:02:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.944 04:02:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.944 04:02:21 -- accel/accel.sh@42 -- # jq -r . 00:06:19.944 [2024-11-26 04:02:21.368733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.944 [2024-11-26 04:02:21.368854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70879 ] 00:06:19.944 [2024-11-26 04:02:21.515636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.944 [2024-11-26 04:02:21.547376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.320 04:02:22 -- accel/accel.sh@18 -- # out=' 00:06:21.320 SPDK Configuration: 00:06:21.320 Core mask: 0x1 00:06:21.320 00:06:21.320 Accel Perf Configuration: 00:06:21.320 Workload Type: dualcast 00:06:21.320 Transfer size: 4096 bytes 00:06:21.320 Vector count 1 00:06:21.320 Module: software 00:06:21.320 Queue depth: 32 00:06:21.320 Allocate depth: 32 00:06:21.320 # threads/core: 1 00:06:21.320 Run time: 1 seconds 00:06:21.320 Verify: Yes 00:06:21.320 00:06:21.320 Running for 1 seconds... 00:06:21.320 00:06:21.320 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.320 ------------------------------------------------------------------------------------ 00:06:21.320 0,0 456256/s 1782 MiB/s 0 0 00:06:21.320 ==================================================================================== 00:06:21.320 Total 456256/s 1782 MiB/s 0 0' 00:06:21.320 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.320 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.320 04:02:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:21.320 04:02:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:21.320 04:02:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.320 04:02:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.320 04:02:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.320 04:02:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.320 04:02:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.320 04:02:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.320 04:02:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.320 04:02:22 -- accel/accel.sh@42 -- # jq -r . 
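
Annotation: every accel_perf invocation in this log goes through an accel.sh wrapper that adds -c /dev/fd/62. That path is bash process substitution at work: build_accel_config assembles a JSON module config — effectively empty here, since every [[ 0 -gt 0 ]] guard fails and accel_json_cfg=() stays empty, which is consistent with each run reporting Module: software — and jq renders it for the binary to read as if it were a file. A minimal sketch of the same pattern, with a placeholder empty config:

  # <(...) expands to a /dev/fd/NN path, like the /dev/fd/62 in the trace
  $ ./build/examples/accel_perf -c <(jq -r . <<< '{}') -t 1 -w dualcast -y
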
00:06:21.320 [2024-11-26 04:02:22.722434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.320 [2024-11-26 04:02:22.722646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70899 ] 00:06:21.320 [2024-11-26 04:02:22.884650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.320 [2024-11-26 04:02:22.917079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.320 04:02:22 -- accel/accel.sh@21 -- # val= 00:06:21.320 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.320 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.320 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.320 04:02:22 -- accel/accel.sh@21 -- # val= 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val=0x1 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val= 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val= 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val=dualcast 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val= 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val=software 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val=32 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val=32 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val=1 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 
04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val=Yes 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val= 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.321 04:02:22 -- accel/accel.sh@21 -- # val= 00:06:21.321 04:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.321 04:02:22 -- accel/accel.sh@20 -- # read -r var val 00:06:22.696 04:02:24 -- accel/accel.sh@21 -- # val= 00:06:22.696 04:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.696 04:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:22.696 04:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:22.696 04:02:24 -- accel/accel.sh@21 -- # val= 00:06:22.696 04:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.696 04:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:22.696 04:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:22.696 04:02:24 -- accel/accel.sh@21 -- # val= 00:06:22.696 04:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.696 04:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:22.696 04:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:22.697 04:02:24 -- accel/accel.sh@21 -- # val= 00:06:22.697 04:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.697 04:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:22.697 04:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:22.697 04:02:24 -- accel/accel.sh@21 -- # val= 00:06:22.697 04:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.697 04:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:22.697 04:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:22.697 04:02:24 -- accel/accel.sh@21 -- # val= 00:06:22.697 04:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.697 04:02:24 -- accel/accel.sh@20 -- # IFS=: 00:06:22.697 04:02:24 -- accel/accel.sh@20 -- # read -r var val 00:06:22.697 ************************************ 00:06:22.697 END TEST accel_dualcast 00:06:22.697 ************************************ 00:06:22.697 04:02:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.697 04:02:24 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:22.697 04:02:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.697 00:06:22.697 real 0m2.711s 00:06:22.697 user 0m2.286s 00:06:22.697 sys 0m0.217s 00:06:22.697 04:02:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.697 04:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:22.697 04:02:24 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:22.697 04:02:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:22.697 04:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.697 04:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:22.697 ************************************ 00:06:22.697 START TEST accel_compare 00:06:22.697 ************************************ 00:06:22.697 04:02:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:22.697 
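
Annotation: the starred banners and the real/user/sys lines around each test come from the run_test helper in common/autotest_common.sh, which names a test, times its command, and prints matching START/END markers — roughly 2.7 s of wall clock per test here, covering two 1-second measurement runs plus startup and teardown. A rough sketch inferred from the output (the real helper also manages xtrace and error propagation):

  run_test() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
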
04:02:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.697 04:02:24 -- accel/accel.sh@17 -- # local accel_module 00:06:22.697 04:02:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:22.697 04:02:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:22.697 04:02:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.697 04:02:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.697 04:02:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.697 04:02:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.697 04:02:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.697 04:02:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.697 04:02:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.697 04:02:24 -- accel/accel.sh@42 -- # jq -r . 00:06:22.697 [2024-11-26 04:02:24.120982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.697 [2024-11-26 04:02:24.121118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70935 ] 00:06:22.697 [2024-11-26 04:02:24.275252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.697 [2024-11-26 04:02:24.307404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.705 04:02:25 -- accel/accel.sh@18 -- # out=' 00:06:23.705 SPDK Configuration: 00:06:23.705 Core mask: 0x1 00:06:23.705 00:06:23.705 Accel Perf Configuration: 00:06:23.705 Workload Type: compare 00:06:23.705 Transfer size: 4096 bytes 00:06:23.705 Vector count 1 00:06:23.705 Module: software 00:06:23.705 Queue depth: 32 00:06:23.705 Allocate depth: 32 00:06:23.705 # threads/core: 1 00:06:23.705 Run time: 1 seconds 00:06:23.705 Verify: Yes 00:06:23.705 00:06:23.705 Running for 1 seconds... 00:06:23.705 00:06:23.705 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.705 ------------------------------------------------------------------------------------ 00:06:23.705 0,0 557536/s 2177 MiB/s 0 0 00:06:23.705 ==================================================================================== 00:06:23.705 Total 557536/s 2177 MiB/s 0 0' 00:06:23.705 04:02:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:23.705 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.705 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.705 04:02:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:23.705 04:02:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.705 04:02:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.705 04:02:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.705 04:02:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.705 04:02:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.705 04:02:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.705 04:02:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.705 04:02:25 -- accel/accel.sh@42 -- # jq -r . 00:06:23.964 [2024-11-26 04:02:25.480139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:23.964 [2024-11-26 04:02:25.480280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70950 ] 00:06:23.964 [2024-11-26 04:02:25.631896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.964 [2024-11-26 04:02:25.663598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val= 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val= 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val=0x1 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val= 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val= 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val=compare 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val= 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val=software 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val=32 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val=32 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val=1 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val=Yes 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val= 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:23.964 04:02:25 -- accel/accel.sh@21 -- # val= 00:06:23.964 04:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # IFS=: 00:06:23.964 04:02:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.339 04:02:26 -- accel/accel.sh@21 -- # val= 00:06:25.339 04:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.339 04:02:26 -- accel/accel.sh@21 -- # val= 00:06:25.339 04:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.339 04:02:26 -- accel/accel.sh@21 -- # val= 00:06:25.339 04:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.339 04:02:26 -- accel/accel.sh@21 -- # val= 00:06:25.339 04:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.339 04:02:26 -- accel/accel.sh@21 -- # val= 00:06:25.339 04:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.339 04:02:26 -- accel/accel.sh@21 -- # val= 00:06:25.339 04:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.339 04:02:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.339 ************************************ 00:06:25.339 END TEST accel_compare 00:06:25.339 ************************************ 00:06:25.339 04:02:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.339 04:02:26 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:25.339 04:02:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.339 00:06:25.339 real 0m2.720s 00:06:25.339 user 0m2.293s 00:06:25.339 sys 0m0.221s 00:06:25.339 04:02:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.339 04:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.339 04:02:26 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:25.339 04:02:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:25.339 04:02:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.339 04:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.339 ************************************ 00:06:25.339 START TEST accel_xor 00:06:25.339 ************************************ 00:06:25.339 04:02:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:25.339 04:02:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.339 04:02:26 -- accel/accel.sh@17 -- # local accel_module 00:06:25.339 
04:02:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:25.339 04:02:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:25.339 04:02:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.339 04:02:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.339 04:02:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.339 04:02:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.339 04:02:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.339 04:02:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.339 04:02:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.339 04:02:26 -- accel/accel.sh@42 -- # jq -r . 00:06:25.339 [2024-11-26 04:02:26.878360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.339 [2024-11-26 04:02:26.878472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:06:25.339 [2024-11-26 04:02:27.024529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.339 [2024-11-26 04:02:27.056472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.713 04:02:28 -- accel/accel.sh@18 -- # out=' 00:06:26.713 SPDK Configuration: 00:06:26.713 Core mask: 0x1 00:06:26.713 00:06:26.713 Accel Perf Configuration: 00:06:26.713 Workload Type: xor 00:06:26.713 Source buffers: 2 00:06:26.713 Transfer size: 4096 bytes 00:06:26.713 Vector count 1 00:06:26.713 Module: software 00:06:26.713 Queue depth: 32 00:06:26.713 Allocate depth: 32 00:06:26.713 # threads/core: 1 00:06:26.713 Run time: 1 seconds 00:06:26.713 Verify: Yes 00:06:26.713 00:06:26.713 Running for 1 seconds... 00:06:26.713 00:06:26.713 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.713 ------------------------------------------------------------------------------------ 00:06:26.713 0,0 421152/s 1645 MiB/s 0 0 00:06:26.713 ==================================================================================== 00:06:26.713 Total 421152/s 1645 MiB/s 0 0' 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:26.713 04:02:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.713 04:02:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:26.713 04:02:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.713 04:02:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.713 04:02:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.713 04:02:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.713 04:02:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.713 04:02:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.713 04:02:28 -- accel/accel.sh@42 -- # jq -r . 00:06:26.713 [2024-11-26 04:02:28.228400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:26.713 [2024-11-26 04:02:28.228779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71006 ] 00:06:26.713 [2024-11-26 04:02:28.368704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.713 [2024-11-26 04:02:28.400062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val= 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val= 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val=0x1 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val= 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val= 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val=xor 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val=2 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val= 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val=software 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val=32 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val=32 00:06:26.713 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.713 04:02:28 -- accel/accel.sh@21 -- # val=1 00:06:26.713 04:02:28 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.713 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.714 04:02:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.714 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.714 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.714 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.714 04:02:28 -- accel/accel.sh@21 -- # val=Yes 00:06:26.714 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.714 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.714 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.714 04:02:28 -- accel/accel.sh@21 -- # val= 00:06:26.714 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.714 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.714 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:26.714 04:02:28 -- accel/accel.sh@21 -- # val= 00:06:26.714 04:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.714 04:02:28 -- accel/accel.sh@20 -- # IFS=: 00:06:26.714 04:02:28 -- accel/accel.sh@20 -- # read -r var val 00:06:28.089 04:02:29 -- accel/accel.sh@21 -- # val= 00:06:28.089 04:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.089 04:02:29 -- accel/accel.sh@21 -- # val= 00:06:28.089 04:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.089 04:02:29 -- accel/accel.sh@21 -- # val= 00:06:28.089 04:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.089 04:02:29 -- accel/accel.sh@21 -- # val= 00:06:28.089 04:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.089 04:02:29 -- accel/accel.sh@21 -- # val= 00:06:28.089 04:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.089 04:02:29 -- accel/accel.sh@21 -- # val= 00:06:28.089 04:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.089 04:02:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.089 04:02:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.089 04:02:29 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:28.089 04:02:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.089 ************************************ 00:06:28.089 END TEST accel_xor 00:06:28.089 ************************************ 00:06:28.089 00:06:28.089 real 0m2.687s 00:06:28.089 user 0m2.275s 00:06:28.089 sys 0m0.206s 00:06:28.089 04:02:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.089 04:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.089 04:02:29 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:28.089 04:02:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:28.089 04:02:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.089 04:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.089 ************************************ 00:06:28.089 START TEST accel_xor 00:06:28.089 ************************************ 00:06:28.089 
04:02:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:28.089 04:02:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.089 04:02:29 -- accel/accel.sh@17 -- # local accel_module 00:06:28.089 04:02:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:28.089 04:02:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:28.089 04:02:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.089 04:02:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.089 04:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.089 04:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.089 04:02:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.089 04:02:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.089 04:02:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.089 04:02:29 -- accel/accel.sh@42 -- # jq -r . 00:06:28.089 [2024-11-26 04:02:29.613176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.089 [2024-11-26 04:02:29.613276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71038 ] 00:06:28.089 [2024-11-26 04:02:29.755571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.089 [2024-11-26 04:02:29.787334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.465 04:02:30 -- accel/accel.sh@18 -- # out=' 00:06:29.465 SPDK Configuration: 00:06:29.465 Core mask: 0x1 00:06:29.465 00:06:29.465 Accel Perf Configuration: 00:06:29.465 Workload Type: xor 00:06:29.465 Source buffers: 3 00:06:29.465 Transfer size: 4096 bytes 00:06:29.465 Vector count 1 00:06:29.465 Module: software 00:06:29.465 Queue depth: 32 00:06:29.465 Allocate depth: 32 00:06:29.465 # threads/core: 1 00:06:29.465 Run time: 1 seconds 00:06:29.465 Verify: Yes 00:06:29.465 00:06:29.465 Running for 1 seconds... 00:06:29.465 00:06:29.465 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.465 ------------------------------------------------------------------------------------ 00:06:29.465 0,0 415296/s 1622 MiB/s 0 0 00:06:29.465 ==================================================================================== 00:06:29.465 Total 415296/s 1622 MiB/s 0 0' 00:06:29.465 04:02:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.465 04:02:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.465 04:02:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:29.465 04:02:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:29.465 04:02:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.465 04:02:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.465 04:02:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.465 04:02:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.465 04:02:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.465 04:02:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.465 04:02:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.465 04:02:30 -- accel/accel.sh@42 -- # jq -r . 00:06:29.465 [2024-11-26 04:02:30.940831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
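The second accel_xor pass adds -x 3, which raises the source-buffer count to three (Source buffers: 3 in the configuration dump above) at only a small throughput cost against the two-buffer run (415296/s vs 421152/s). The equivalent standalone sketch, under the same assumption that the harness config descriptor can be omitted:

  # Sketch: xor across three source buffers (flags from the traced command line).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3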
00:06:29.465 [2024-11-26 04:02:30.940922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:06:29.465 [2024-11-26 04:02:31.080727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.465 [2024-11-26 04:02:31.111332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val= 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val= 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val=0x1 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val= 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val= 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val=xor 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val=3 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val= 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val=software 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val=32 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val=32 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val=1 00:06:29.466 04:02:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val=Yes 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val= 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:29.466 04:02:31 -- accel/accel.sh@21 -- # val= 00:06:29.466 04:02:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # IFS=: 00:06:29.466 04:02:31 -- accel/accel.sh@20 -- # read -r var val 00:06:30.843 04:02:32 -- accel/accel.sh@21 -- # val= 00:06:30.843 04:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.843 04:02:32 -- accel/accel.sh@21 -- # val= 00:06:30.843 04:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.843 04:02:32 -- accel/accel.sh@21 -- # val= 00:06:30.843 04:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.843 04:02:32 -- accel/accel.sh@21 -- # val= 00:06:30.843 04:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.843 04:02:32 -- accel/accel.sh@21 -- # val= 00:06:30.843 04:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.843 04:02:32 -- accel/accel.sh@21 -- # val= 00:06:30.843 04:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.843 04:02:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.843 04:02:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.843 04:02:32 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:30.843 04:02:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.843 00:06:30.843 real 0m2.663s 00:06:30.843 user 0m1.144s 00:06:30.843 sys 0m0.099s 00:06:30.843 04:02:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.843 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.843 ************************************ 00:06:30.843 END TEST accel_xor 00:06:30.843 ************************************ 00:06:30.843 04:02:32 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:30.843 04:02:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:30.843 04:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.843 04:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:30.843 ************************************ 00:06:30.843 START TEST accel_dif_verify 00:06:30.843 ************************************ 
00:06:30.843 04:02:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:30.843 04:02:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.843 04:02:32 -- accel/accel.sh@17 -- # local accel_module 00:06:30.843 04:02:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:30.843 04:02:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:30.843 04:02:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.843 04:02:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.843 04:02:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.843 04:02:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.843 04:02:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.843 04:02:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.843 04:02:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.843 04:02:32 -- accel/accel.sh@42 -- # jq -r . 00:06:30.843 [2024-11-26 04:02:32.310715] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.843 [2024-11-26 04:02:32.310809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71094 ] 00:06:30.843 [2024-11-26 04:02:32.450036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.843 [2024-11-26 04:02:32.482157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.217 04:02:33 -- accel/accel.sh@18 -- # out=' 00:06:32.217 SPDK Configuration: 00:06:32.217 Core mask: 0x1 00:06:32.217 00:06:32.217 Accel Perf Configuration: 00:06:32.217 Workload Type: dif_verify 00:06:32.217 Vector size: 4096 bytes 00:06:32.218 Transfer size: 4096 bytes 00:06:32.218 Block size: 512 bytes 00:06:32.218 Metadata size: 8 bytes 00:06:32.218 Vector count 1 00:06:32.218 Module: software 00:06:32.218 Queue depth: 32 00:06:32.218 Allocate depth: 32 00:06:32.218 # threads/core: 1 00:06:32.218 Run time: 1 seconds 00:06:32.218 Verify: No 00:06:32.218 00:06:32.218 Running for 1 seconds... 00:06:32.218 00:06:32.218 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.218 ------------------------------------------------------------------------------------ 00:06:32.218 0,0 126336/s 493 MiB/s 0 0 00:06:32.218 ==================================================================================== 00:06:32.218 Total 126336/s 493 MiB/s 0 0' 00:06:32.218 04:02:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:32.218 04:02:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.218 04:02:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.218 04:02:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.218 04:02:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.218 04:02:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.218 04:02:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.218 04:02:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.218 04:02:33 -- accel/accel.sh@42 -- # jq -r . 00:06:32.218 [2024-11-26 04:02:33.649306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
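The accel_dif_verify pass above checks DIF (data integrity field) protection over 4096-byte vectors carved into 512-byte blocks carrying 8 bytes of metadata each (the Block size and Metadata size lines), i.e. eight protected blocks per transfer; 126336 transfers/s at 4096 bytes each gives the 493 MiB/s in the table. A standalone sketch, same config assumption as the xor runs:

  # Sketch: DIF verify workload (flags from the traced command line).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify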
00:06:32.218 [2024-11-26 04:02:33.649459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71109 ] 00:06:32.218 [2024-11-26 04:02:33.804168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.218 [2024-11-26 04:02:33.835674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val= 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val= 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val=0x1 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val= 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val= 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val=dif_verify 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val= 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val=software 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 
-- # val=32 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val=32 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val=1 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val=No 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val= 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.218 04:02:33 -- accel/accel.sh@21 -- # val= 00:06:32.218 04:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.218 04:02:33 -- accel/accel.sh@20 -- # read -r var val 00:06:33.607 04:02:34 -- accel/accel.sh@21 -- # val= 00:06:33.607 04:02:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # IFS=: 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # read -r var val 00:06:33.607 04:02:34 -- accel/accel.sh@21 -- # val= 00:06:33.607 04:02:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # IFS=: 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # read -r var val 00:06:33.607 04:02:34 -- accel/accel.sh@21 -- # val= 00:06:33.607 04:02:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # IFS=: 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # read -r var val 00:06:33.607 04:02:34 -- accel/accel.sh@21 -- # val= 00:06:33.607 04:02:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # IFS=: 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # read -r var val 00:06:33.607 04:02:34 -- accel/accel.sh@21 -- # val= 00:06:33.607 04:02:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # IFS=: 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # read -r var val 00:06:33.607 04:02:34 -- accel/accel.sh@21 -- # val= 00:06:33.607 04:02:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # IFS=: 00:06:33.607 04:02:34 -- accel/accel.sh@20 -- # read -r var val 00:06:33.607 04:02:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.607 04:02:34 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:33.607 04:02:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.607 00:06:33.607 real 0m2.695s 00:06:33.607 user 0m2.282s 00:06:33.607 sys 0m0.208s 00:06:33.607 04:02:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.607 04:02:34 -- common/autotest_common.sh@10 -- # set +x 00:06:33.607 ************************************ 00:06:33.607 END TEST 
accel_dif_verify 00:06:33.607 ************************************ 00:06:33.607 04:02:35 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:33.607 04:02:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:33.607 04:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.607 04:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:33.607 ************************************ 00:06:33.607 START TEST accel_dif_generate 00:06:33.607 ************************************ 00:06:33.607 04:02:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:33.607 04:02:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.607 04:02:35 -- accel/accel.sh@17 -- # local accel_module 00:06:33.607 04:02:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:33.607 04:02:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:33.607 04:02:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.607 04:02:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.607 04:02:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.607 04:02:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.607 04:02:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.607 04:02:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.607 04:02:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.607 04:02:35 -- accel/accel.sh@42 -- # jq -r . 00:06:33.607 [2024-11-26 04:02:35.046361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.607 [2024-11-26 04:02:35.046463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71150 ] 00:06:33.607 [2024-11-26 04:02:35.188566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.607 [2024-11-26 04:02:35.219944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.979 04:02:36 -- accel/accel.sh@18 -- # out=' 00:06:34.979 SPDK Configuration: 00:06:34.979 Core mask: 0x1 00:06:34.979 00:06:34.979 Accel Perf Configuration: 00:06:34.979 Workload Type: dif_generate 00:06:34.979 Vector size: 4096 bytes 00:06:34.979 Transfer size: 4096 bytes 00:06:34.979 Block size: 512 bytes 00:06:34.979 Metadata size: 8 bytes 00:06:34.979 Vector count 1 00:06:34.979 Module: software 00:06:34.979 Queue depth: 32 00:06:34.979 Allocate depth: 32 00:06:34.979 # threads/core: 1 00:06:34.979 Run time: 1 seconds 00:06:34.979 Verify: No 00:06:34.979 00:06:34.979 Running for 1 seconds... 
00:06:34.979 00:06:34.979 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.979 ------------------------------------------------------------------------------------ 00:06:34.979 0,0 151200/s 590 MiB/s 0 0 00:06:34.979 ==================================================================================== 00:06:34.979 Total 151200/s 590 MiB/s 0 0' 00:06:34.979 04:02:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:34.979 04:02:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.979 04:02:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.979 04:02:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.979 04:02:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.979 04:02:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.979 04:02:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.979 04:02:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.979 04:02:36 -- accel/accel.sh@42 -- # jq -r . 00:06:34.979 [2024-11-26 04:02:36.390150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.979 [2024-11-26 04:02:36.390262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71165 ] 00:06:34.979 [2024-11-26 04:02:36.532279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.979 [2024-11-26 04:02:36.560973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val= 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val= 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val=0x1 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val= 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val= 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val=dif_generate 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val
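accel_dif_generate is the producing side of the same DIF scheme, generating the 8-byte protection fields rather than checking them, which fits its higher rate here (151200/s against 126336/s for dif_verify). Standalone sketch, same assumptions as the earlier runs:

  # Sketch: DIF generate workload (flags from the traced command line).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate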
00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val= 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val=software 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val=32 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val=32 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val=1 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val=No 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val= 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.979 04:02:36 -- accel/accel.sh@21 -- # val= 00:06:34.979 04:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.979 04:02:36 -- accel/accel.sh@20 -- # read -r var val 00:06:36.352 04:02:37 -- accel/accel.sh@21 -- # val= 00:06:36.352 04:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # IFS=: 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # read -r var val 00:06:36.352 04:02:37 -- accel/accel.sh@21 -- # val= 00:06:36.352 04:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # IFS=: 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # read -r var val 00:06:36.352 04:02:37 -- accel/accel.sh@21 -- # val= 00:06:36.352 04:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.352 04:02:37 -- 
accel/accel.sh@20 -- # IFS=: 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # read -r var val 00:06:36.352 04:02:37 -- accel/accel.sh@21 -- # val= 00:06:36.352 04:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # IFS=: 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # read -r var val 00:06:36.352 04:02:37 -- accel/accel.sh@21 -- # val= 00:06:36.352 04:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # IFS=: 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # read -r var val 00:06:36.352 04:02:37 -- accel/accel.sh@21 -- # val= 00:06:36.352 04:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # IFS=: 00:06:36.352 04:02:37 -- accel/accel.sh@20 -- # read -r var val 00:06:36.352 ************************************ 00:06:36.352 END TEST accel_dif_generate 00:06:36.352 ************************************ 00:06:36.352 04:02:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.352 04:02:37 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:36.352 04:02:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.352 00:06:36.352 real 0m2.674s 00:06:36.352 user 0m2.273s 00:06:36.352 sys 0m0.201s 00:06:36.352 04:02:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.352 04:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:36.352 04:02:37 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:36.352 04:02:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:36.352 04:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.352 04:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:36.352 ************************************ 00:06:36.352 START TEST accel_dif_generate_copy 00:06:36.352 ************************************ 00:06:36.352 04:02:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:36.352 04:02:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.352 04:02:37 -- accel/accel.sh@17 -- # local accel_module 00:06:36.352 04:02:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:36.352 04:02:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:36.352 04:02:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.352 04:02:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.352 04:02:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.352 04:02:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.352 04:02:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.352 04:02:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.352 04:02:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.352 04:02:37 -- accel/accel.sh@42 -- # jq -r . 00:06:36.352 [2024-11-26 04:02:37.769341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:36.352 [2024-11-26 04:02:37.769423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71195 ] 00:06:36.352 [2024-11-26 04:02:37.911066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.352 [2024-11-26 04:02:37.938433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.726 04:02:39 -- accel/accel.sh@18 -- # out=' 00:06:37.726 SPDK Configuration: 00:06:37.726 Core mask: 0x1 00:06:37.726 00:06:37.726 Accel Perf Configuration: 00:06:37.726 Workload Type: dif_generate_copy 00:06:37.726 Vector size: 4096 bytes 00:06:37.726 Transfer size: 4096 bytes 00:06:37.726 Vector count 1 00:06:37.726 Module: software 00:06:37.726 Queue depth: 32 00:06:37.726 Allocate depth: 32 00:06:37.726 # threads/core: 1 00:06:37.726 Run time: 1 seconds 00:06:37.726 Verify: No 00:06:37.726 00:06:37.726 Running for 1 seconds... 00:06:37.726 00:06:37.726 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.726 ------------------------------------------------------------------------------------ 00:06:37.726 0,0 116512/s 455 MiB/s 0 0 00:06:37.726 ==================================================================================== 00:06:37.726 Total 116512/s 455 MiB/s 0 0' 00:06:37.726 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.726 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.726 04:02:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:37.726 04:02:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:37.726 04:02:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.726 04:02:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.726 04:02:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.726 04:02:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.726 04:02:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.726 04:02:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.726 04:02:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.726 04:02:39 -- accel/accel.sh@42 -- # jq -r . 00:06:37.726 [2024-11-26 04:02:39.091926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
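accel_dif_generate_copy pairs DIF generation with a buffer copy, which is why its val trace carries two '4096 bytes' sizes (source and destination); it lands at 116512 transfers/s, or 455 MiB/s, slightly under plain dif_generate. Sketch, same config assumption:

  # Sketch: DIF generate + copy workload (flags from the traced command line).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy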
00:06:37.726 [2024-11-26 04:02:39.092020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71216 ] 00:06:37.726 [2024-11-26 04:02:39.233656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.726 [2024-11-26 04:02:39.262821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.726 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.726 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.726 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.726 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.726 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.726 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.726 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.726 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val=0x1 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val=software 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val=32 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val=32 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 
-- # val=1 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val=No 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:37.727 04:02:39 -- accel/accel.sh@21 -- # val= 00:06:37.727 04:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # IFS=: 00:06:37.727 04:02:39 -- accel/accel.sh@20 -- # read -r var val 00:06:38.665 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.665 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.665 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.665 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.665 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.665 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.665 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.666 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.666 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.666 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.666 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.666 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.666 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.666 04:02:40 -- accel/accel.sh@21 -- # val= 00:06:38.666 04:02:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # IFS=: 00:06:38.666 04:02:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.666 ************************************ 00:06:38.666 END TEST accel_dif_generate_copy 00:06:38.666 ************************************ 00:06:38.666 04:02:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.666 04:02:40 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:38.666 04:02:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.666 00:06:38.666 real 0m2.651s 00:06:38.666 user 0m2.257s 00:06:38.666 sys 0m0.194s 00:06:38.666 04:02:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.666 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:38.924 04:02:40 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:38.924 04:02:40 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.924 04:02:40 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:38.924 04:02:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.924 04:02:40 -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.924 ************************************ 00:06:38.924 START TEST accel_comp 00:06:38.924 ************************************ 00:06:38.924 04:02:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.924 04:02:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.924 04:02:40 -- accel/accel.sh@17 -- # local accel_module 00:06:38.924 04:02:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.925 04:02:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:38.925 04:02:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.925 04:02:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.925 04:02:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.925 04:02:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.925 04:02:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.925 04:02:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.925 04:02:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.925 04:02:40 -- accel/accel.sh@42 -- # jq -r . 00:06:38.925 [2024-11-26 04:02:40.470268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.925 [2024-11-26 04:02:40.470353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71251 ] 00:06:38.925 [2024-11-26 04:02:40.607378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.925 [2024-11-26 04:02:40.635988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.301 04:02:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:40.301 00:06:40.301 SPDK Configuration: 00:06:40.301 Core mask: 0x1 00:06:40.301 00:06:40.301 Accel Perf Configuration: 00:06:40.301 Workload Type: compress 00:06:40.301 Transfer size: 4096 bytes 00:06:40.301 Vector count 1 00:06:40.301 Module: software 00:06:40.301 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.301 Queue depth: 32 00:06:40.301 Allocate depth: 32 00:06:40.301 # threads/core: 1 00:06:40.301 Run time: 1 seconds 00:06:40.301 Verify: No 00:06:40.301 00:06:40.301 Running for 1 seconds... 
00:06:40.301 00:06:40.301 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.301 ------------------------------------------------------------------------------------ 00:06:40.301 0,0 63584/s 248 MiB/s 0 0 00:06:40.301 ==================================================================================== 00:06:40.301 Total 63584/s 248 MiB/s 0 0' 00:06:40.301 04:02:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.301 04:02:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.301 04:02:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.301 04:02:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.301 04:02:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.301 04:02:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.301 04:02:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.301 04:02:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.301 04:02:41 -- accel/accel.sh@42 -- # jq -r . 00:06:40.301 [2024-11-26 04:02:41.789860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.301 [2024-11-26 04:02:41.789968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71266 ] 00:06:40.301 [2024-11-26 04:02:41.936046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.301 [2024-11-26 04:02:41.963967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.301 04:02:41 -- accel/accel.sh@21 -- # val= 00:06:40.301 04:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:41 -- accel/accel.sh@21 -- # val= 00:06:40.301 04:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:41 -- accel/accel.sh@21 -- # val= 00:06:40.301 04:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:41 -- accel/accel.sh@21 -- # val=0x1 00:06:40.301 04:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:41 -- accel/accel.sh@21 -- # val= 00:06:40.301 04:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:41 -- accel/accel.sh@21 -- # val= 00:06:40.301 04:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val=compress 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=:
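The accel_comp pass is the first to consume real input: -l points accel_perf at the repo's bib sample, which the software module compresses in 4096-byte chunks at 63584 chunks/s (248 MiB/s of input, matching the table). Sketch, reusing the traced path and flags under the same config assumption:

  # Sketch: software compress over the repo's bib test file (command from the trace).
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib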
00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val=software 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val=32 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val=32 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val=1 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val=No 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.301 04:02:42 -- accel/accel.sh@21 -- # val= 00:06:40.301 04:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.301 04:02:42 -- accel/accel.sh@20 -- # read -r var val 00:06:41.676 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.676 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.676 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.676 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.676 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.676 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.676 04:02:43 -- accel/accel.sh@21 -- # val= 
00:06:41.676 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.676 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.676 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.676 04:02:43 -- accel/accel.sh@21 -- # val= 00:06:41.676 04:02:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # IFS=: 00:06:41.676 04:02:43 -- accel/accel.sh@20 -- # read -r var val 00:06:41.676 04:02:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.676 ************************************ 00:06:41.676 END TEST accel_comp 00:06:41.676 ************************************ 00:06:41.676 04:02:43 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:41.676 04:02:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.676 00:06:41.676 real 0m2.656s 00:06:41.676 user 0m2.265s 00:06:41.676 sys 0m0.193s 00:06:41.676 04:02:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.676 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:41.676 04:02:43 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.676 04:02:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:41.676 04:02:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.676 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:41.676 ************************************ 00:06:41.676 START TEST accel_decomp 00:06:41.676 ************************************ 00:06:41.676 04:02:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.676 04:02:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.676 04:02:43 -- accel/accel.sh@17 -- # local accel_module 00:06:41.676 04:02:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.676 04:02:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.676 04:02:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.676 04:02:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.676 04:02:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.676 04:02:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.676 04:02:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.676 04:02:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.676 04:02:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.676 04:02:43 -- accel/accel.sh@42 -- # jq -r . 00:06:41.676 [2024-11-26 04:02:43.168833] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.676 [2024-11-26 04:02:43.169055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71305 ] 00:06:41.676 [2024-11-26 04:02:43.311413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.676 [2024-11-26 04:02:43.341148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.051 04:02:44 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:43.051 00:06:43.051 SPDK Configuration: 00:06:43.051 Core mask: 0x1 00:06:43.051 00:06:43.051 Accel Perf Configuration: 00:06:43.051 Workload Type: decompress 00:06:43.051 Transfer size: 4096 bytes 00:06:43.051 Vector count 1 00:06:43.051 Module: software 00:06:43.051 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:43.051 Queue depth: 32 00:06:43.051 Allocate depth: 32 00:06:43.051 # threads/core: 1 00:06:43.051 Run time: 1 seconds 00:06:43.051 Verify: Yes 00:06:43.051 00:06:43.051 Running for 1 seconds... 00:06:43.051 00:06:43.051 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.051 ------------------------------------------------------------------------------------ 00:06:43.051 0,0 83520/s 326 MiB/s 0 0 00:06:43.051 ==================================================================================== 00:06:43.051 Total 83520/s 326 MiB/s 0 0' 00:06:43.051 04:02:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:43.051 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.052 04:02:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.052 04:02:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.052 04:02:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.052 04:02:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.052 04:02:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.052 04:02:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.052 04:02:44 -- accel/accel.sh@42 -- # jq -r . 00:06:43.052 [2024-11-26 04:02:44.490587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:43.052 [2024-11-26 04:02:44.490763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71325 ] 00:06:43.052 [2024-11-26 04:02:44.628520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.052 [2024-11-26 04:02:44.657979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val= 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val= 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val= 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val=0x1 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val= 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val= 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val=decompress 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val= 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val=software 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val=32 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- 
accel/accel.sh@21 -- # val=32 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val=1 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val=Yes 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val= 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:43.052 04:02:44 -- accel/accel.sh@21 -- # val= 00:06:43.052 04:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # IFS=: 00:06:43.052 04:02:44 -- accel/accel.sh@20 -- # read -r var val 00:06:44.428 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:44.428 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.428 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:44.428 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.428 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:44.428 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.428 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:44.428 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.428 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:44.428 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.428 04:02:45 -- accel/accel.sh@21 -- # val= 00:06:44.428 04:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # IFS=: 00:06:44.428 04:02:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.428 04:02:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.428 04:02:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:44.428 04:02:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.428 00:06:44.428 real 0m2.658s 00:06:44.428 user 0m2.263s 00:06:44.428 sys 0m0.195s 00:06:44.428 04:02:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.428 04:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.428 ************************************ 00:06:44.428 END TEST accel_decomp 00:06:44.428 ************************************ 00:06:44.428 04:02:45 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
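
run_test then kicks off accel_decmop_full (the harness's own spelling), which repeats the decompress workload with one extra flag, -o 0. Judging purely from the configuration dumps, it bumps the transfer size from 4096 bytes to full 111250-byte chunks of the bib file, with everything else, including -y verification, unchanged; that reading is inferred from this log, not from accel_perf documentation. A hedged reproduction:

    # Full-buffer software decompress with verification, as traced below.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/examples/accel_perf" -t 1 -w decompress -l "$spdk/test/accel/bib" -y -o 0
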
00:06:44.428 04:02:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:44.428 04:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.428 04:02:45 -- common/autotest_common.sh@10 -- # set +x 00:06:44.428 ************************************ 00:06:44.428 START TEST accel_decmop_full 00:06:44.428 ************************************ 00:06:44.428 04:02:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:44.428 04:02:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.428 04:02:45 -- accel/accel.sh@17 -- # local accel_module 00:06:44.428 04:02:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:44.428 04:02:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:44.428 04:02:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.428 04:02:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.428 04:02:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.428 04:02:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.428 04:02:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.428 04:02:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.428 04:02:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.428 04:02:45 -- accel/accel.sh@42 -- # jq -r . 00:06:44.428 [2024-11-26 04:02:45.882935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.428 [2024-11-26 04:02:45.883159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71355 ] 00:06:44.428 [2024-11-26 04:02:46.031306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.428 [2024-11-26 04:02:46.061732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.862 04:02:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:45.862 00:06:45.862 SPDK Configuration: 00:06:45.862 Core mask: 0x1 00:06:45.862 00:06:45.862 Accel Perf Configuration: 00:06:45.862 Workload Type: decompress 00:06:45.862 Transfer size: 111250 bytes 00:06:45.862 Vector count 1 00:06:45.862 Module: software 00:06:45.862 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.862 Queue depth: 32 00:06:45.862 Allocate depth: 32 00:06:45.862 # threads/core: 1 00:06:45.862 Run time: 1 seconds 00:06:45.862 Verify: Yes 00:06:45.862 00:06:45.862 Running for 1 seconds... 
00:06:45.862 00:06:45.862 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.862 ------------------------------------------------------------------------------------ 00:06:45.862 0,0 5696/s 604 MiB/s 0 0 00:06:45.862 ==================================================================================== 00:06:45.862 Total 5696/s 604 MiB/s 0 0' 00:06:45.862 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.862 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.862 04:02:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.862 04:02:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.862 04:02:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.862 04:02:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.862 04:02:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.862 04:02:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.862 04:02:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.862 04:02:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.862 04:02:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.862 04:02:47 -- accel/accel.sh@42 -- # jq -r . 00:06:45.863 [2024-11-26 04:02:47.231229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.863 [2024-11-26 04:02:47.231340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71376 ] 00:06:45.863 [2024-11-26 04:02:47.376144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.863 [2024-11-26 04:02:47.405793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val= 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val= 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val= 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val=0x1 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val= 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val= 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val=decompress 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:45.863 04:02:47 -- accel/accel.sh@20
-- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val= 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val=software 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val=32 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val=32 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val=1 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val=Yes 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val= 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:45.863 04:02:47 -- accel/accel.sh@21 -- # val= 00:06:45.863 04:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # IFS=: 00:06:45.863 04:02:47 -- accel/accel.sh@20 -- # read -r var val 00:06:46.799 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.799 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.799 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.799 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.799 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.799 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.799 04:02:48 -- accel/accel.sh@21 -- # 
val= 00:06:46.799 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.799 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.799 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.799 04:02:48 -- accel/accel.sh@21 -- # val= 00:06:46.799 04:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.799 04:02:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.799 04:02:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.799 04:02:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:46.799 04:02:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.799 00:06:46.799 real 0m2.695s 00:06:46.799 user 0m2.276s 00:06:46.799 sys 0m0.214s 00:06:46.799 04:02:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.799 ************************************ 00:06:46.799 END TEST accel_decmop_full 00:06:46.799 ************************************ 00:06:46.799 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:06:47.058 04:02:48 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.058 04:02:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:47.058 04:02:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.058 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:06:47.058 ************************************ 00:06:47.058 START TEST accel_decomp_mcore 00:06:47.058 ************************************ 00:06:47.058 04:02:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.058 04:02:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.059 04:02:48 -- accel/accel.sh@17 -- # local accel_module 00:06:47.059 04:02:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.059 04:02:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.059 04:02:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.059 04:02:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.059 04:02:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.059 04:02:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.059 04:02:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.059 04:02:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.059 04:02:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.059 04:02:48 -- accel/accel.sh@42 -- # jq -r . 00:06:47.059 [2024-11-26 04:02:48.630201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:47.059 [2024-11-26 04:02:48.630309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71411 ] 00:06:47.059 [2024-11-26 04:02:48.776819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.059 [2024-11-26 04:02:48.808432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.059 [2024-11-26 04:02:48.808651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.059 [2024-11-26 04:02:48.808623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.059 [2024-11-26 04:02:48.808719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.439 04:02:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:48.439 00:06:48.439 SPDK Configuration: 00:06:48.439 Core mask: 0xf 00:06:48.439 00:06:48.439 Accel Perf Configuration: 00:06:48.439 Workload Type: decompress 00:06:48.439 Transfer size: 4096 bytes 00:06:48.439 Vector count 1 00:06:48.439 Module: software 00:06:48.439 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.439 Queue depth: 32 00:06:48.439 Allocate depth: 32 00:06:48.439 # threads/core: 1 00:06:48.439 Run time: 1 seconds 00:06:48.439 Verify: Yes 00:06:48.439 00:06:48.439 Running for 1 seconds... 00:06:48.439 00:06:48.439 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.439 ------------------------------------------------------------------------------------ 00:06:48.439 0,0 77312/s 302 MiB/s 0 0 00:06:48.439 3,0 57824/s 225 MiB/s 0 0 00:06:48.439 2,0 58112/s 227 MiB/s 0 0 00:06:48.439 1,0 58240/s 227 MiB/s 0 0 00:06:48.439 ==================================================================================== 00:06:48.439 Total 251488/s 982 MiB/s 0 0' 00:06:48.439 04:02:49 -- accel/accel.sh@20 -- # IFS=: 00:06:48.439 04:02:49 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:48.440 04:02:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:48.440 04:02:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.440 04:02:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.440 04:02:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.440 04:02:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.440 04:02:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.440 04:02:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.440 04:02:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.440 04:02:49 -- accel/accel.sh@42 -- # jq -r . 00:06:48.440 [2024-11-26 04:02:49.979226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:48.440 [2024-11-26 04:02:49.979327] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71429 ] 00:06:48.440 [2024-11-26 04:02:50.127471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.440 [2024-11-26 04:02:50.160366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.440 [2024-11-26 04:02:50.160789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.440 [2024-11-26 04:02:50.161190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.440 [2024-11-26 04:02:50.161293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val= 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val= 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val= 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val=0xf 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val= 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val= 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val=decompress 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val= 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val=software 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 
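
With -m 0xf the app reserves four cores and each reactor runs the workload independently, which is why the mcore summary table above has one Core,Thread row per core. Any row can be sanity-checked the same way as the single-core tables: bandwidth is transfers/s times 4096 bytes, floored to MiB/s. A quick check of the 0,0 row:

    # 77312 transfers/s * 4096 B = 302 MiB/s, matching the table above.
    awk 'BEGIN { printf "%d MiB/s\n", int(77312 * 4096 / 1048576) }'
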
00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val=32 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val=32 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val=1 00:06:48.440 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.440 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.440 04:02:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.702 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.702 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.702 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.702 04:02:50 -- accel/accel.sh@21 -- # val=Yes 00:06:48.702 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.702 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.702 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.702 04:02:50 -- accel/accel.sh@21 -- # val= 00:06:48.702 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.702 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.702 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:48.702 04:02:50 -- accel/accel.sh@21 -- # val= 00:06:48.702 04:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.702 04:02:50 -- accel/accel.sh@20 -- # IFS=: 00:06:48.702 04:02:50 -- accel/accel.sh@20 -- # read -r var val 00:06:49.647 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.647 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.647 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.647 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.647 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.647 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.647 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.648 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.648 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.648 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.648 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.648 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.648 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.648 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.648 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.648 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.648 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.648 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.648 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.648 04:02:51 -- 
accel/accel.sh@20 -- # read -r var val 00:06:49.648 04:02:51 -- accel/accel.sh@21 -- # val= 00:06:49.648 04:02:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # IFS=: 00:06:49.648 04:02:51 -- accel/accel.sh@20 -- # read -r var val 00:06:49.648 ************************************ 00:06:49.648 END TEST accel_decomp_mcore 00:06:49.648 ************************************ 00:06:49.648 04:02:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.648 04:02:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:49.648 04:02:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.648 00:06:49.648 real 0m2.727s 00:06:49.648 user 0m4.355s 00:06:49.648 sys 0m0.130s 00:06:49.648 04:02:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.648 04:02:51 -- common/autotest_common.sh@10 -- # set +x 00:06:49.648 04:02:51 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.648 04:02:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:49.648 04:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.648 04:02:51 -- common/autotest_common.sh@10 -- # set +x 00:06:49.648 ************************************ 00:06:49.648 START TEST accel_decomp_full_mcore 00:06:49.648 ************************************ 00:06:49.648 04:02:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.648 04:02:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.648 04:02:51 -- accel/accel.sh@17 -- # local accel_module 00:06:49.648 04:02:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.648 04:02:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:49.648 04:02:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.648 04:02:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.648 04:02:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.648 04:02:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.648 04:02:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.648 04:02:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.648 04:02:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.648 04:02:51 -- accel/accel.sh@42 -- # jq -r . 00:06:49.908 [2024-11-26 04:02:51.412857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.908 [2024-11-26 04:02:51.412963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71468 ] 00:06:49.908 [2024-11-26 04:02:51.559526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.908 [2024-11-26 04:02:51.591159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.908 [2024-11-26 04:02:51.591672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.908 [2024-11-26 04:02:51.591661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.908 [2024-11-26 04:02:51.591751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.283 04:02:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:51.283 00:06:51.283 SPDK Configuration: 00:06:51.283 Core mask: 0xf 00:06:51.283 00:06:51.283 Accel Perf Configuration: 00:06:51.283 Workload Type: decompress 00:06:51.283 Transfer size: 111250 bytes 00:06:51.283 Vector count 1 00:06:51.283 Module: software 00:06:51.283 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.283 Queue depth: 32 00:06:51.283 Allocate depth: 32 00:06:51.283 # threads/core: 1 00:06:51.283 Run time: 1 seconds 00:06:51.283 Verify: Yes 00:06:51.283 00:06:51.283 Running for 1 seconds... 00:06:51.283 00:06:51.283 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.283 ------------------------------------------------------------------------------------ 00:06:51.283 0,0 5728/s 607 MiB/s 0 0 00:06:51.283 3,0 4320/s 458 MiB/s 0 0 00:06:51.283 2,0 4352/s 461 MiB/s 0 0 00:06:51.283 1,0 4352/s 461 MiB/s 0 0 00:06:51.283 ==================================================================================== 00:06:51.283 Total 18752/s 1989 MiB/s 0 0' 00:06:51.283 04:02:52 -- accel/accel.sh@20 -- # IFS=: 00:06:51.283 04:02:52 -- accel/accel.sh@20 -- # read -r var val 00:06:51.283 04:02:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.283 04:02:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.283 04:02:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.283 04:02:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.283 04:02:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.283 04:02:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.283 04:02:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.283 04:02:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.283 04:02:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.283 04:02:52 -- accel/accel.sh@42 -- # jq -r . 00:06:51.284 [2024-11-26 04:02:52.773730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:51.284 [2024-11-26 04:02:52.773835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71491 ] 00:06:51.284 [2024-11-26 04:02:52.922635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.284 [2024-11-26 04:02:52.965421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.284 [2024-11-26 04:02:52.965791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.284 [2024-11-26 04:02:52.966215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.284 [2024-11-26 04:02:52.966258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val= 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val= 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val= 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val=0xf 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val= 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val= 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val=decompress 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val= 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val=software 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 
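
Every accel_perf launch in this log receives its accel-framework config as -c /dev/fd/62, the kind of descriptor path bash process substitution hands out; the build_accel_config / jq -r . pair in the trace assembles and pretty-prints that JSON before the benchmark starts. A sketch of the same plumbing, with an empty JSON object standing in for whatever build_accel_config actually emits:

    # Feed a JSON config to accel_perf over a process-substitution fd.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/examples/accel_perf" -c <(jq -r . <<< '{}') \
        -t 1 -w decompress -l "$spdk/test/accel/bib" -y -o 0 -m 0xf
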
00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val=32 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val=32 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val=1 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val=Yes 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val= 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:51.284 04:02:53 -- accel/accel.sh@21 -- # val= 00:06:51.284 04:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # IFS=: 00:06:51.284 04:02:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- 
accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@21 -- # val= 00:06:52.666 04:02:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # IFS=: 00:06:52.666 04:02:54 -- accel/accel.sh@20 -- # read -r var val 00:06:52.666 04:02:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.666 04:02:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:52.666 04:02:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.666 00:06:52.666 real 0m2.832s 00:06:52.666 user 0m9.053s 00:06:52.666 sys 0m0.269s 00:06:52.666 04:02:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.666 04:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:52.666 ************************************ 00:06:52.666 END TEST accel_decomp_full_mcore 00:06:52.666 ************************************ 00:06:52.666 04:02:54 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:52.666 04:02:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:52.666 04:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.666 04:02:54 -- common/autotest_common.sh@10 -- # set +x 00:06:52.666 ************************************ 00:06:52.666 START TEST accel_decomp_mthread 00:06:52.666 ************************************ 00:06:52.666 04:02:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:52.666 04:02:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.666 04:02:54 -- accel/accel.sh@17 -- # local accel_module 00:06:52.666 04:02:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:52.666 04:02:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:52.666 04:02:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.666 04:02:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.666 04:02:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.666 04:02:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.666 04:02:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.666 04:02:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.666 04:02:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.666 04:02:54 -- accel/accel.sh@42 -- # jq -r . 00:06:52.666 [2024-11-26 04:02:54.315701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.666 [2024-11-26 04:02:54.315839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71524 ] 00:06:52.926 [2024-11-26 04:02:54.471495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.926 [2024-11-26 04:02:54.528790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.310 04:02:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:54.310 00:06:54.310 SPDK Configuration: 00:06:54.310 Core mask: 0x1 00:06:54.310 00:06:54.310 Accel Perf Configuration: 00:06:54.310 Workload Type: decompress 00:06:54.310 Transfer size: 4096 bytes 00:06:54.310 Vector count 1 00:06:54.310 Module: software 00:06:54.310 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.310 Queue depth: 32 00:06:54.310 Allocate depth: 32 00:06:54.310 # threads/core: 2 00:06:54.310 Run time: 1 seconds 00:06:54.310 Verify: Yes 00:06:54.310 00:06:54.310 Running for 1 seconds... 00:06:54.310 00:06:54.310 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.310 ------------------------------------------------------------------------------------ 00:06:54.310 0,1 30624/s 119 MiB/s 0 0 00:06:54.310 0,0 30464/s 119 MiB/s 0 0 00:06:54.311 ==================================================================================== 00:06:54.311 Total 61088/s 238 MiB/s 0 0' 00:06:54.311 04:02:55 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:54.311 04:02:55 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:54.311 04:02:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.311 04:02:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.311 04:02:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.311 04:02:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.311 04:02:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.311 04:02:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.311 04:02:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.311 04:02:55 -- accel/accel.sh@42 -- # jq -r . 00:06:54.311 [2024-11-26 04:02:55.769674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:54.311 [2024-11-26 04:02:55.769794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71545 ] 00:06:54.311 [2024-11-26 04:02:55.918777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.311 [2024-11-26 04:02:55.966996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val= 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val= 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val= 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val=0x1 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val= 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val= 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val=decompress 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val= 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val=software 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val=32 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- 
accel/accel.sh@21 -- # val=32 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val=2 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val=Yes 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val= 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:54.311 04:02:56 -- accel/accel.sh@21 -- # val= 00:06:54.311 04:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # IFS=: 00:06:54.311 04:02:56 -- accel/accel.sh@20 -- # read -r var val 00:06:55.698 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.698 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.698 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.698 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.698 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.698 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.698 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.698 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.698 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.698 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.698 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.698 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.698 04:02:57 -- accel/accel.sh@21 -- # val= 00:06:55.698 04:02:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # IFS=: 00:06:55.698 04:02:57 -- accel/accel.sh@20 -- # read -r var val 00:06:55.698 04:02:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.698 04:02:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:55.698 04:02:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.698 00:06:55.698 real 0m2.900s 00:06:55.698 user 0m2.389s 00:06:55.698 sys 0m0.303s 00:06:55.698 04:02:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.698 ************************************ 00:06:55.698 END TEST accel_decomp_mthread 00:06:55.698 
************************************ 00:06:55.698 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:06:55.698 04:02:57 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.698 04:02:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:55.698 04:02:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.698 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:06:55.698 ************************************ 00:06:55.698 START TEST accel_deomp_full_mthread 00:06:55.698 ************************************ 00:06:55.698 04:02:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.698 04:02:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.698 04:02:57 -- accel/accel.sh@17 -- # local accel_module 00:06:55.698 04:02:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.698 04:02:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.698 04:02:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.698 04:02:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.698 04:02:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.698 04:02:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.698 04:02:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.698 04:02:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.698 04:02:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.698 04:02:57 -- accel/accel.sh@42 -- # jq -r . 00:06:55.698 [2024-11-26 04:02:57.268137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.698 [2024-11-26 04:02:57.268731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71580 ] 00:06:55.698 [2024-11-26 04:02:57.417441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.959 [2024-11-26 04:02:57.469867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.345 04:02:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:57.345 00:06:57.345 SPDK Configuration: 00:06:57.345 Core mask: 0x1 00:06:57.345 00:06:57.345 Accel Perf Configuration: 00:06:57.345 Workload Type: decompress 00:06:57.345 Transfer size: 111250 bytes 00:06:57.345 Vector count 1 00:06:57.345 Module: software 00:06:57.345 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.345 Queue depth: 32 00:06:57.345 Allocate depth: 32 00:06:57.345 # threads/core: 2 00:06:57.345 Run time: 1 seconds 00:06:57.345 Verify: Yes 00:06:57.345 00:06:57.345 Running for 1 seconds... 
00:06:57.345 00:06:57.345 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.345 ------------------------------------------------------------------------------------ 00:06:57.345 0,1 2176/s 230 MiB/s 0 0 00:06:57.345 0,0 2176/s 230 MiB/s 0 0 00:06:57.345 ==================================================================================== 00:06:57.345 Total 4352/s 461 MiB/s 0 0' 00:06:57.345 04:02:58 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:58 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.345 04:02:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.345 04:02:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.345 04:02:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.345 04:02:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.345 04:02:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.345 04:02:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.345 04:02:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.345 04:02:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.345 04:02:58 -- accel/accel.sh@42 -- # jq -r . 00:06:57.345 [2024-11-26 04:02:58.770987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.345 [2024-11-26 04:02:58.771108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71601 ] 00:06:57.345 [2024-11-26 04:02:58.926218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.345 [2024-11-26 04:02:58.992208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val= 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val= 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val= 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val=0x1 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val= 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val= 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val=decompress 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@24 -- #
accel_opc=decompress 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val= 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val=software 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val=32 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val=32 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val=2 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val=Yes 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val= 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 04:02:59 -- accel/accel.sh@21 -- # val= 00:06:57.345 04:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 04:02:59 -- accel/accel.sh@20 -- # read -r var val 00:06:58.746 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.746 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.746 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.746 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.746 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.746 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # 
read -r var val 00:06:58.746 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.746 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.746 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.746 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.746 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.746 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.746 04:03:00 -- accel/accel.sh@21 -- # val= 00:06:58.746 04:03:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # IFS=: 00:06:58.746 04:03:00 -- accel/accel.sh@20 -- # read -r var val 00:06:58.746 04:03:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.746 04:03:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:58.746 04:03:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.746 00:06:58.746 real 0m3.022s 00:06:58.746 user 0m2.509s 00:06:58.746 sys 0m0.301s 00:06:58.746 04:03:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.746 ************************************ 00:06:58.746 END TEST accel_deomp_full_mthread 00:06:58.746 ************************************ 00:06:58.746 04:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.746 04:03:00 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:58.746 04:03:00 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:58.746 04:03:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:58.746 04:03:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.746 04:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.746 04:03:00 -- accel/accel.sh@129 -- # build_accel_config 00:06:58.746 04:03:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.746 04:03:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.746 04:03:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.746 04:03:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.746 04:03:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.746 04:03:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.746 04:03:00 -- accel/accel.sh@42 -- # jq -r . 00:06:58.746 ************************************ 00:06:58.746 START TEST accel_dif_functional_tests 00:06:58.746 ************************************ 00:06:58.746 04:03:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:58.746 [2024-11-26 04:03:00.397439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:58.746 [2024-11-26 04:03:00.397831] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71637 ] 00:06:59.008 [2024-11-26 04:03:00.551093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.008 [2024-11-26 04:03:00.602721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.008 [2024-11-26 04:03:00.603092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.008 [2024-11-26 04:03:00.603161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.008 00:06:59.008 00:06:59.008 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.008 http://cunit.sourceforge.net/ 00:06:59.008 00:06:59.008 00:06:59.008 Suite: accel_dif 00:06:59.008 Test: verify: DIF generated, GUARD check ...passed 00:06:59.008 Test: verify: DIF generated, APPTAG check ...passed 00:06:59.008 Test: verify: DIF generated, REFTAG check ...passed 00:06:59.008 Test: verify: DIF not generated, GUARD check ...passed 00:06:59.008 Test: verify: DIF not generated, APPTAG check ...passed 00:06:59.008 Test: verify: DIF not generated, REFTAG check ...passed 00:06:59.008 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:59.008 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-26 04:03:00.688393] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:59.008 [2024-11-26 04:03:00.688634] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:59.008 [2024-11-26 04:03:00.688721] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:59.008 [2024-11-26 04:03:00.688794] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:59.008 [2024-11-26 04:03:00.688840] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:59.008 [2024-11-26 04:03:00.688876] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:59.008 [2024-11-26 04:03:00.688971] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:59.008 passed 00:06:59.008 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:59.008 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:59.008 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:59.008 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:59.008 Test: generate copy: DIF generated, GUARD check ...passed 00:06:59.008 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:59.008 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:59.008 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:59.008 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-11-26 04:03:00.689554] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:59.008 passed 00:06:59.008 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:59.008 Test: generate copy: iovecs-len validate ...passed 00:06:59.008 Test: generate copy: buffer alignment validate ...[2024-11-26 04:03:00.690167] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:59.008 passed 00:06:59.008 00:06:59.008 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.008 suites 1 1 n/a 0 0 00:06:59.008 tests 20 20 20 0 0 00:06:59.008 asserts 204 204 204 0 n/a 00:06:59.008 00:06:59.008 Elapsed time = 0.005 seconds 00:06:59.269 00:06:59.269 real 0m0.578s 00:06:59.269 user 0m0.661s 00:06:59.269 sys 0m0.217s 00:06:59.269 04:03:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.269 04:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:59.269 ************************************ 00:06:59.269 END TEST accel_dif_functional_tests 00:06:59.269 ************************************ 00:06:59.269 00:06:59.269 real 0m58.348s 00:06:59.269 user 1m1.858s 00:06:59.269 sys 0m5.960s 00:06:59.269 ************************************ 00:06:59.269 END TEST accel 00:06:59.269 ************************************ 00:06:59.269 04:03:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.269 04:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:59.269 04:03:01 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:59.269 04:03:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.269 04:03:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.269 04:03:01 -- common/autotest_common.sh@10 -- # set +x 00:06:59.269 ************************************ 00:06:59.269 START TEST accel_rpc 00:06:59.269 ************************************ 00:06:59.269 04:03:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:59.593 * Looking for test storage... 00:06:59.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:59.593 04:03:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:59.593 04:03:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:59.593 04:03:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:59.593 04:03:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:59.593 04:03:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:59.593 04:03:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:59.593 04:03:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:59.593 04:03:01 -- scripts/common.sh@335 -- # IFS=.-: 00:06:59.593 04:03:01 -- scripts/common.sh@335 -- # read -ra ver1 00:06:59.593 04:03:01 -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.593 04:03:01 -- scripts/common.sh@336 -- # read -ra ver2 00:06:59.593 04:03:01 -- scripts/common.sh@337 -- # local 'op=<' 00:06:59.593 04:03:01 -- scripts/common.sh@339 -- # ver1_l=2 00:06:59.593 04:03:01 -- scripts/common.sh@340 -- # ver2_l=1 00:06:59.593 04:03:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:59.593 04:03:01 -- scripts/common.sh@343 -- # case "$op" in 00:06:59.593 04:03:01 -- scripts/common.sh@344 -- # : 1 00:06:59.593 04:03:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:59.593 04:03:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.593 04:03:01 -- scripts/common.sh@364 -- # decimal 1 00:06:59.593 04:03:01 -- scripts/common.sh@352 -- # local d=1 00:06:59.593 04:03:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.593 04:03:01 -- scripts/common.sh@354 -- # echo 1 00:06:59.593 04:03:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:59.593 04:03:01 -- scripts/common.sh@365 -- # decimal 2 00:06:59.593 04:03:01 -- scripts/common.sh@352 -- # local d=2 00:06:59.593 04:03:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.593 04:03:01 -- scripts/common.sh@354 -- # echo 2 00:06:59.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.593 04:03:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:59.593 04:03:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:59.593 04:03:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:59.593 04:03:01 -- scripts/common.sh@367 -- # return 0 00:06:59.593 04:03:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.593 04:03:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:59.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.593 --rc genhtml_branch_coverage=1 00:06:59.593 --rc genhtml_function_coverage=1 00:06:59.593 --rc genhtml_legend=1 00:06:59.593 --rc geninfo_all_blocks=1 00:06:59.593 --rc geninfo_unexecuted_blocks=1 00:06:59.593 00:06:59.593 ' 00:06:59.593 04:03:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:59.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.593 --rc genhtml_branch_coverage=1 00:06:59.593 --rc genhtml_function_coverage=1 00:06:59.593 --rc genhtml_legend=1 00:06:59.593 --rc geninfo_all_blocks=1 00:06:59.593 --rc geninfo_unexecuted_blocks=1 00:06:59.593 00:06:59.593 ' 00:06:59.593 04:03:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:59.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.593 --rc genhtml_branch_coverage=1 00:06:59.593 --rc genhtml_function_coverage=1 00:06:59.594 --rc genhtml_legend=1 00:06:59.594 --rc geninfo_all_blocks=1 00:06:59.594 --rc geninfo_unexecuted_blocks=1 00:06:59.594 00:06:59.594 ' 00:06:59.594 04:03:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:59.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.594 --rc genhtml_branch_coverage=1 00:06:59.594 --rc genhtml_function_coverage=1 00:06:59.594 --rc genhtml_legend=1 00:06:59.594 --rc geninfo_all_blocks=1 00:06:59.594 --rc geninfo_unexecuted_blocks=1 00:06:59.594 00:06:59.594 ' 00:06:59.594 04:03:01 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:59.594 04:03:01 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71715 00:06:59.594 04:03:01 -- accel/accel_rpc.sh@15 -- # waitforlisten 71715 00:06:59.594 04:03:01 -- common/autotest_common.sh@829 -- # '[' -z 71715 ']' 00:06:59.594 04:03:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.594 04:03:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.594 04:03:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:59.594 04:03:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.594 04:03:01 -- common/autotest_common.sh@10 -- # set +x 00:06:59.594 04:03:01 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:59.594 [2024-11-26 04:03:01.259353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.594 [2024-11-26 04:03:01.259539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:06:59.869 [2024-11-26 04:03:01.413712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.869 [2024-11-26 04:03:01.465028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:59.869 [2024-11-26 04:03:01.465489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.441 04:03:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.441 04:03:02 -- common/autotest_common.sh@862 -- # return 0 00:07:00.441 04:03:02 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:00.441 04:03:02 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:00.441 04:03:02 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:00.441 04:03:02 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:00.441 04:03:02 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:00.441 04:03:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.441 04:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.441 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:00.441 ************************************ 00:07:00.441 START TEST accel_assign_opcode 00:07:00.441 ************************************ 00:07:00.441 04:03:02 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:00.441 04:03:02 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:00.441 04:03:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.441 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:00.441 [2024-11-26 04:03:02.094301] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:00.441 04:03:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.441 04:03:02 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:00.441 04:03:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.441 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:00.441 [2024-11-26 04:03:02.102286] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:00.441 04:03:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.442 04:03:02 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:00.442 04:03:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.442 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:00.703 04:03:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.703 04:03:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:00.703 04:03:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.703 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:00.703 04:03:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:00.703 04:03:02 -- accel/accel_rpc.sh@42 -- # grep software 
00:07:00.703 04:03:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.703 software 00:07:00.703 ************************************ 00:07:00.703 END TEST accel_assign_opcode 00:07:00.703 ************************************ 00:07:00.703 00:07:00.703 real 0m0.238s 00:07:00.704 user 0m0.032s 00:07:00.704 sys 0m0.010s 00:07:00.704 04:03:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.704 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:00.704 04:03:02 -- accel/accel_rpc.sh@55 -- # killprocess 71715 00:07:00.704 04:03:02 -- common/autotest_common.sh@936 -- # '[' -z 71715 ']' 00:07:00.704 04:03:02 -- common/autotest_common.sh@940 -- # kill -0 71715 00:07:00.704 04:03:02 -- common/autotest_common.sh@941 -- # uname 00:07:00.704 04:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:00.704 04:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71715 00:07:00.704 killing process with pid 71715 00:07:00.704 04:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:00.704 04:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:00.704 04:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71715' 00:07:00.704 04:03:02 -- common/autotest_common.sh@955 -- # kill 71715 00:07:00.704 04:03:02 -- common/autotest_common.sh@960 -- # wait 71715 00:07:00.966 ************************************ 00:07:00.966 END TEST accel_rpc 00:07:00.966 ************************************ 00:07:00.966 00:07:00.966 real 0m1.673s 00:07:00.966 user 0m1.572s 00:07:00.966 sys 0m0.483s 00:07:00.966 04:03:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.966 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:01.229 04:03:02 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.229 04:03:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.229 04:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.229 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:01.229 ************************************ 00:07:01.229 START TEST app_cmdline 00:07:01.229 ************************************ 00:07:01.229 04:03:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.229 * Looking for test storage... 
00:07:01.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:01.229 04:03:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:01.229 04:03:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:01.229 04:03:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:01.229 04:03:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:01.229 04:03:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:01.229 04:03:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:01.229 04:03:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:01.229 04:03:02 -- scripts/common.sh@335 -- # IFS=.-: 00:07:01.229 04:03:02 -- scripts/common.sh@335 -- # read -ra ver1 00:07:01.229 04:03:02 -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.229 04:03:02 -- scripts/common.sh@336 -- # read -ra ver2 00:07:01.230 04:03:02 -- scripts/common.sh@337 -- # local 'op=<' 00:07:01.230 04:03:02 -- scripts/common.sh@339 -- # ver1_l=2 00:07:01.230 04:03:02 -- scripts/common.sh@340 -- # ver2_l=1 00:07:01.230 04:03:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:01.230 04:03:02 -- scripts/common.sh@343 -- # case "$op" in 00:07:01.230 04:03:02 -- scripts/common.sh@344 -- # : 1 00:07:01.230 04:03:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:01.230 04:03:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.230 04:03:02 -- scripts/common.sh@364 -- # decimal 1 00:07:01.230 04:03:02 -- scripts/common.sh@352 -- # local d=1 00:07:01.230 04:03:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.230 04:03:02 -- scripts/common.sh@354 -- # echo 1 00:07:01.230 04:03:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:01.230 04:03:02 -- scripts/common.sh@365 -- # decimal 2 00:07:01.230 04:03:02 -- scripts/common.sh@352 -- # local d=2 00:07:01.230 04:03:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.230 04:03:02 -- scripts/common.sh@354 -- # echo 2 00:07:01.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.230 04:03:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:01.230 04:03:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:01.230 04:03:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:01.230 04:03:02 -- scripts/common.sh@367 -- # return 0 00:07:01.230 04:03:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.230 04:03:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:01.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.230 --rc genhtml_branch_coverage=1 00:07:01.230 --rc genhtml_function_coverage=1 00:07:01.230 --rc genhtml_legend=1 00:07:01.230 --rc geninfo_all_blocks=1 00:07:01.230 --rc geninfo_unexecuted_blocks=1 00:07:01.230 00:07:01.230 ' 00:07:01.230 04:03:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:01.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.230 --rc genhtml_branch_coverage=1 00:07:01.230 --rc genhtml_function_coverage=1 00:07:01.230 --rc genhtml_legend=1 00:07:01.230 --rc geninfo_all_blocks=1 00:07:01.230 --rc geninfo_unexecuted_blocks=1 00:07:01.230 00:07:01.230 ' 00:07:01.230 04:03:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:01.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.230 --rc genhtml_branch_coverage=1 00:07:01.230 --rc genhtml_function_coverage=1 00:07:01.230 --rc genhtml_legend=1 00:07:01.230 --rc geninfo_all_blocks=1 00:07:01.230 --rc geninfo_unexecuted_blocks=1 00:07:01.230 00:07:01.230 ' 00:07:01.230 04:03:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:01.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.230 --rc genhtml_branch_coverage=1 00:07:01.230 --rc genhtml_function_coverage=1 00:07:01.230 --rc genhtml_legend=1 00:07:01.230 --rc geninfo_all_blocks=1 00:07:01.230 --rc geninfo_unexecuted_blocks=1 00:07:01.230 00:07:01.230 ' 00:07:01.230 04:03:02 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.230 04:03:02 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71811 00:07:01.230 04:03:02 -- app/cmdline.sh@18 -- # waitforlisten 71811 00:07:01.230 04:03:02 -- common/autotest_common.sh@829 -- # '[' -z 71811 ']' 00:07:01.230 04:03:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.230 04:03:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.230 04:03:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.230 04:03:02 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.230 04:03:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.230 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:07:01.230 [2024-11-26 04:03:02.951167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:01.230 [2024-11-26 04:03:02.951284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71811 ] 00:07:01.491 [2024-11-26 04:03:03.107328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.491 [2024-11-26 04:03:03.139014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.491 [2024-11-26 04:03:03.139209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.064 04:03:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.064 04:03:03 -- common/autotest_common.sh@862 -- # return 0 00:07:02.064 04:03:03 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:02.325 { 00:07:02.325 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:02.325 "fields": { 00:07:02.325 "major": 24, 00:07:02.325 "minor": 1, 00:07:02.325 "patch": 1, 00:07:02.325 "suffix": "-pre", 00:07:02.325 "commit": "c13c99a5e" 00:07:02.325 } 00:07:02.325 } 00:07:02.325 04:03:03 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.325 04:03:03 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.325 04:03:03 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.325 04:03:03 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.325 04:03:03 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.325 04:03:03 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.325 04:03:03 -- app/cmdline.sh@26 -- # sort 00:07:02.325 04:03:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.325 04:03:03 -- common/autotest_common.sh@10 -- # set +x 00:07:02.325 04:03:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.325 04:03:03 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.325 04:03:03 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.325 04:03:03 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.325 04:03:03 -- common/autotest_common.sh@650 -- # local es=0 00:07:02.325 04:03:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.325 04:03:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.326 04:03:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.326 04:03:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.326 04:03:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.326 04:03:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.326 04:03:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.326 04:03:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.326 04:03:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:02.326 04:03:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.587 request: 00:07:02.587 { 00:07:02.587 "method": "env_dpdk_get_mem_stats", 00:07:02.587 "req_id": 1 00:07:02.587 } 00:07:02.587 Got 
JSON-RPC error response 00:07:02.587 response: 00:07:02.587 { 00:07:02.587 "code": -32601, 00:07:02.587 "message": "Method not found" 00:07:02.587 } 00:07:02.587 04:03:04 -- common/autotest_common.sh@653 -- # es=1 00:07:02.587 04:03:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.587 04:03:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.587 04:03:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.587 04:03:04 -- app/cmdline.sh@1 -- # killprocess 71811 00:07:02.587 04:03:04 -- common/autotest_common.sh@936 -- # '[' -z 71811 ']' 00:07:02.587 04:03:04 -- common/autotest_common.sh@940 -- # kill -0 71811 00:07:02.587 04:03:04 -- common/autotest_common.sh@941 -- # uname 00:07:02.587 04:03:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.587 04:03:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71811 00:07:02.587 killing process with pid 71811 00:07:02.587 04:03:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.587 04:03:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.587 04:03:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71811' 00:07:02.587 04:03:04 -- common/autotest_common.sh@955 -- # kill 71811 00:07:02.587 04:03:04 -- common/autotest_common.sh@960 -- # wait 71811 00:07:02.847 ************************************ 00:07:02.847 END TEST app_cmdline 00:07:02.847 ************************************ 00:07:02.847 00:07:02.847 real 0m1.714s 00:07:02.847 user 0m1.999s 00:07:02.847 sys 0m0.402s 00:07:02.847 04:03:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.847 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:07:02.847 04:03:04 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:02.847 04:03:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.847 04:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.847 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:07:02.847 ************************************ 00:07:02.847 START TEST version 00:07:02.847 ************************************ 00:07:02.847 04:03:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:02.847 * Looking for test storage... 
00:07:02.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:02.847 04:03:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:02.847 04:03:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:02.847 04:03:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:03.107 04:03:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:03.107 04:03:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:03.107 04:03:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:03.107 04:03:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:03.107 04:03:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:03.108 04:03:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:03.108 04:03:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.108 04:03:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:03.108 04:03:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:03.108 04:03:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:03.108 04:03:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:03.108 04:03:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:03.108 04:03:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:03.108 04:03:04 -- scripts/common.sh@344 -- # : 1 00:07:03.108 04:03:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:03.108 04:03:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.108 04:03:04 -- scripts/common.sh@364 -- # decimal 1 00:07:03.108 04:03:04 -- scripts/common.sh@352 -- # local d=1 00:07:03.108 04:03:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.108 04:03:04 -- scripts/common.sh@354 -- # echo 1 00:07:03.108 04:03:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:03.108 04:03:04 -- scripts/common.sh@365 -- # decimal 2 00:07:03.108 04:03:04 -- scripts/common.sh@352 -- # local d=2 00:07:03.108 04:03:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.108 04:03:04 -- scripts/common.sh@354 -- # echo 2 00:07:03.108 04:03:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:03.108 04:03:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:03.108 04:03:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:03.108 04:03:04 -- scripts/common.sh@367 -- # return 0 00:07:03.108 04:03:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.108 04:03:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:03.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.108 --rc genhtml_branch_coverage=1 00:07:03.108 --rc genhtml_function_coverage=1 00:07:03.108 --rc genhtml_legend=1 00:07:03.108 --rc geninfo_all_blocks=1 00:07:03.108 --rc geninfo_unexecuted_blocks=1 00:07:03.108 00:07:03.108 ' 00:07:03.108 04:03:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:03.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.108 --rc genhtml_branch_coverage=1 00:07:03.108 --rc genhtml_function_coverage=1 00:07:03.108 --rc genhtml_legend=1 00:07:03.108 --rc geninfo_all_blocks=1 00:07:03.108 --rc geninfo_unexecuted_blocks=1 00:07:03.108 00:07:03.108 ' 00:07:03.108 04:03:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:03.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.108 --rc genhtml_branch_coverage=1 00:07:03.108 --rc genhtml_function_coverage=1 00:07:03.108 --rc genhtml_legend=1 00:07:03.108 --rc geninfo_all_blocks=1 00:07:03.108 --rc geninfo_unexecuted_blocks=1 00:07:03.108 00:07:03.108 ' 00:07:03.108 04:03:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:03.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.108 --rc genhtml_branch_coverage=1 00:07:03.108 --rc genhtml_function_coverage=1 00:07:03.108 --rc genhtml_legend=1 00:07:03.108 --rc geninfo_all_blocks=1 00:07:03.108 --rc geninfo_unexecuted_blocks=1 00:07:03.108 00:07:03.108 ' 00:07:03.108 04:03:04 -- app/version.sh@17 -- # get_header_version major 00:07:03.108 04:03:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.108 04:03:04 -- app/version.sh@14 -- # tr -d '"' 00:07:03.108 04:03:04 -- app/version.sh@14 -- # cut -f2 00:07:03.108 04:03:04 -- app/version.sh@17 -- # major=24 00:07:03.108 04:03:04 -- app/version.sh@18 -- # get_header_version minor 00:07:03.108 04:03:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.108 04:03:04 -- app/version.sh@14 -- # cut -f2 00:07:03.108 04:03:04 -- app/version.sh@14 -- # tr -d '"' 00:07:03.108 04:03:04 -- app/version.sh@18 -- # minor=1 00:07:03.108 04:03:04 -- app/version.sh@19 -- # get_header_version patch 00:07:03.108 04:03:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.108 04:03:04 -- app/version.sh@14 -- # cut -f2 00:07:03.108 04:03:04 -- app/version.sh@14 -- # tr -d '"' 00:07:03.108 04:03:04 -- app/version.sh@19 -- # patch=1 00:07:03.108 04:03:04 -- app/version.sh@20 -- # get_header_version suffix 00:07:03.108 04:03:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.108 04:03:04 -- app/version.sh@14 -- # cut -f2 00:07:03.108 04:03:04 -- app/version.sh@14 -- # tr -d '"' 00:07:03.108 04:03:04 -- app/version.sh@20 -- # suffix=-pre 00:07:03.108 04:03:04 -- app/version.sh@22 -- # version=24.1 00:07:03.108 04:03:04 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:03.108 04:03:04 -- app/version.sh@25 -- # version=24.1.1 00:07:03.108 04:03:04 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:03.108 04:03:04 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:03.108 04:03:04 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:03.108 04:03:04 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:03.108 04:03:04 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:03.108 ************************************ 00:07:03.108 END TEST version 00:07:03.108 ************************************ 00:07:03.108 00:07:03.108 real 0m0.202s 00:07:03.108 user 0m0.118s 00:07:03.108 sys 0m0.105s 00:07:03.108 04:03:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.108 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:07:03.108 04:03:04 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:03.108 04:03:04 -- spdk/autotest.sh@191 -- # uname -s 00:07:03.108 04:03:04 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:03.108 04:03:04 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:03.108 04:03:04 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:03.108 04:03:04 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:03.108 04:03:04 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme 
/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:03.108 04:03:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:03.108 04:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.108 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:07:03.108 ************************************ 00:07:03.108 START TEST blockdev_nvme 00:07:03.108 ************************************ 00:07:03.108 04:03:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:03.108 * Looking for test storage... 00:07:03.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:03.108 04:03:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:03.369 04:03:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:03.369 04:03:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:03.369 04:03:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:03.369 04:03:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:03.369 04:03:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:03.369 04:03:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:03.369 04:03:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:03.369 04:03:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:03.370 04:03:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.370 04:03:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:03.370 04:03:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:03.370 04:03:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:03.370 04:03:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:03.370 04:03:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:03.370 04:03:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:03.370 04:03:04 -- scripts/common.sh@344 -- # : 1 00:07:03.370 04:03:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:03.370 04:03:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.370 04:03:04 -- scripts/common.sh@364 -- # decimal 1 00:07:03.370 04:03:04 -- scripts/common.sh@352 -- # local d=1 00:07:03.370 04:03:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.370 04:03:04 -- scripts/common.sh@354 -- # echo 1 00:07:03.370 04:03:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:03.370 04:03:04 -- scripts/common.sh@365 -- # decimal 2 00:07:03.370 04:03:04 -- scripts/common.sh@352 -- # local d=2 00:07:03.370 04:03:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.370 04:03:04 -- scripts/common.sh@354 -- # echo 2 00:07:03.370 04:03:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:03.370 04:03:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:03.370 04:03:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:03.370 04:03:04 -- scripts/common.sh@367 -- # return 0 00:07:03.370 04:03:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.370 04:03:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.370 --rc genhtml_branch_coverage=1 00:07:03.370 --rc genhtml_function_coverage=1 00:07:03.370 --rc genhtml_legend=1 00:07:03.370 --rc geninfo_all_blocks=1 00:07:03.370 --rc geninfo_unexecuted_blocks=1 00:07:03.370 00:07:03.370 ' 00:07:03.370 04:03:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.370 --rc genhtml_branch_coverage=1 00:07:03.370 --rc genhtml_function_coverage=1 00:07:03.370 --rc genhtml_legend=1 00:07:03.370 --rc geninfo_all_blocks=1 00:07:03.370 --rc geninfo_unexecuted_blocks=1 00:07:03.370 00:07:03.370 ' 00:07:03.370 04:03:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.370 --rc genhtml_branch_coverage=1 00:07:03.370 --rc genhtml_function_coverage=1 00:07:03.370 --rc genhtml_legend=1 00:07:03.370 --rc geninfo_all_blocks=1 00:07:03.370 --rc geninfo_unexecuted_blocks=1 00:07:03.370 00:07:03.370 ' 00:07:03.370 04:03:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:03.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.370 --rc genhtml_branch_coverage=1 00:07:03.370 --rc genhtml_function_coverage=1 00:07:03.370 --rc genhtml_legend=1 00:07:03.370 --rc geninfo_all_blocks=1 00:07:03.370 --rc geninfo_unexecuted_blocks=1 00:07:03.370 00:07:03.370 ' 00:07:03.370 04:03:04 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:03.370 04:03:04 -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.370 04:03:04 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:03.370 04:03:04 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.370 04:03:04 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:03.370 04:03:04 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:03.370 04:03:04 -- bdev/blockdev.sh@18 -- # : 00:07:03.370 04:03:04 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:03.370 04:03:04 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:03.370 04:03:04 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:03.370 04:03:04 -- bdev/blockdev.sh@672 -- # uname -s 00:07:03.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:03.370 04:03:04 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:03.370 04:03:04 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:03.370 04:03:04 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:03.370 04:03:04 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:03.370 04:03:04 -- bdev/blockdev.sh@682 -- # dek= 00:07:03.370 04:03:04 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:03.370 04:03:04 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:03.370 04:03:04 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:03.370 04:03:04 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:03.370 04:03:04 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:03.370 04:03:04 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:03.370 04:03:04 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=71970 00:07:03.370 04:03:04 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:03.370 04:03:04 -- bdev/blockdev.sh@47 -- # waitforlisten 71970 00:07:03.370 04:03:04 -- common/autotest_common.sh@829 -- # '[' -z 71970 ']' 00:07:03.370 04:03:04 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:03.370 04:03:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.370 04:03:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.370 04:03:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.370 04:03:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.370 04:03:04 -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 [2024-11-26 04:03:05.021552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.370 [2024-11-26 04:03:05.021668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71970 ] 00:07:03.632 [2024-11-26 04:03:05.168788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.632 [2024-11-26 04:03:05.200631] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.632 [2024-11-26 04:03:05.200815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.205 04:03:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.205 04:03:05 -- common/autotest_common.sh@862 -- # return 0 00:07:04.205 04:03:05 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:04.205 04:03:05 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:04.205 04:03:05 -- bdev/blockdev.sh@79 -- # local json 00:07:04.205 04:03:05 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:04.205 04:03:05 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:04.205 04:03:05 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:04.205 04:03:05 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.205 04:03:05 -- common/autotest_common.sh@10 -- # set +x 00:07:04.466 04:03:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.466 04:03:06 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:04.466 04:03:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.466 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:04.466 04:03:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.466 04:03:06 -- bdev/blockdev.sh@738 -- # cat 00:07:04.466 04:03:06 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:04.466 04:03:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.466 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:04.467 04:03:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.467 04:03:06 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:04.467 04:03:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.467 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:04.467 04:03:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.467 04:03:06 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:04.467 04:03:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.467 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:04.467 04:03:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.729 04:03:06 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:04.729 04:03:06 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:04.729 04:03:06 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:04.729 04:03:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.729 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:04.729 04:03:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.729 04:03:06 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:04.730 04:03:06 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "03b57b02-d1a2-41be-a980-f277dcdef86d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03b57b02-d1a2-41be-a980-f277dcdef86d",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "852b8d14-4041-4170-8af2-baf79ecbdf2a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' 
"num_blocks": 1310720,' ' "uuid": "852b8d14-4041-4170-8af2-baf79ecbdf2a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9367a711-ac65-4562-b1da-bbd2107f27f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9367a711-ac65-4562-b1da-bbd2107f27f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "31ecce21-cf4b-4400-9e9f-6a5f6aecdd60"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31ecce21-cf4b-4400-9e9f-6a5f6aecdd60",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' 
"multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8bbf8a23-bce0-4f3e-8078-106a98bd91ac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8bbf8a23-bce0-4f3e-8078-106a98bd91ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1a4f08f0-8f60-4c67-8231-f3a95b5601e9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1a4f08f0-8f60-4c67-8231-f3a95b5601e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:04.730 04:03:06 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:04.730 04:03:06 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:04.730 04:03:06 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:04.730 04:03:06 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:04.730 04:03:06 -- bdev/blockdev.sh@752 -- # killprocess 71970 00:07:04.730 04:03:06 -- common/autotest_common.sh@936 -- # '[' -z 71970 ']' 00:07:04.730 04:03:06 -- common/autotest_common.sh@940 -- # kill -0 71970 00:07:04.730 04:03:06 -- common/autotest_common.sh@941 -- # uname 00:07:04.730 04:03:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.730 04:03:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 71970 00:07:04.730 04:03:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.730 killing process with pid 71970 00:07:04.730 04:03:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.730 04:03:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71970' 00:07:04.730 04:03:06 -- common/autotest_common.sh@955 -- # kill 71970 00:07:04.730 04:03:06 -- common/autotest_common.sh@960 -- # wait 71970 00:07:04.991 04:03:06 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:04.991 04:03:06 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:04.991 04:03:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:04.991 04:03:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.991 04:03:06 -- common/autotest_common.sh@10 -- # set +x 00:07:04.991 ************************************ 00:07:04.991 START TEST bdev_hello_world 00:07:04.991 ************************************ 00:07:04.991 04:03:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:04.991 [2024-11-26 04:03:06.657244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.991 [2024-11-26 04:03:06.657452] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72043 ] 00:07:05.255 [2024-11-26 04:03:06.802766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.255 [2024-11-26 04:03:06.834284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.515 [2024-11-26 04:03:07.184101] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:05.515 [2024-11-26 04:03:07.184277] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:05.515 [2024-11-26 04:03:07.184309] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:05.515 [2024-11-26 04:03:07.186300] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:05.515 [2024-11-26 04:03:07.187016] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:05.515 [2024-11-26 04:03:07.187044] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:05.515 [2024-11-26 04:03:07.187522] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
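The hello_world pass above reduces to a single invocation of the bundled example app against the shared test config. A minimal sketch of running it by hand, assuming the same checkout under /home/vagrant/spdk_repo/spdk and the four QEMU NVMe controllers attached as traced earlier:

    # open Nvme0n1, write "Hello World!", read it back, then stop
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1
    # success ends with the NOTICE "Read string from bdev : Hello World!"

The -b flag names the bdev the example opens; the harness picked Nvme0n1 as the first entry of the bdev list it enumerated over RPC.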
00:07:05.515 00:07:05.515 [2024-11-26 04:03:07.187544] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:05.776 00:07:05.776 real 0m0.746s 00:07:05.776 user 0m0.485s 00:07:05.776 sys 0m0.156s 00:07:05.776 ************************************ 00:07:05.776 END TEST bdev_hello_world 00:07:05.776 ************************************ 00:07:05.776 04:03:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.776 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:05.776 04:03:07 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:05.776 04:03:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.776 04:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.776 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:05.776 ************************************ 00:07:05.776 START TEST bdev_bounds 00:07:05.776 ************************************ 00:07:05.776 04:03:07 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:05.776 Process bdevio pid: 72068 00:07:05.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.776 04:03:07 -- bdev/blockdev.sh@288 -- # bdevio_pid=72068 00:07:05.776 04:03:07 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.777 04:03:07 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 72068' 00:07:05.777 04:03:07 -- bdev/blockdev.sh@291 -- # waitforlisten 72068 00:07:05.777 04:03:07 -- common/autotest_common.sh@829 -- # '[' -z 72068 ']' 00:07:05.777 04:03:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.777 04:03:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.777 04:03:07 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:05.777 04:03:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.777 04:03:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.777 04:03:07 -- common/autotest_common.sh@10 -- # set +x 00:07:05.777 [2024-11-26 04:03:07.457463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
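The bounds suite that starts here is split across two processes: bdevio comes up with -w so it initializes and then blocks waiting on /var/tmp/spdk.sock, and tests.py then fires every registered CUnit suite over that socket (visible as perform_tests in the trace below). A condensed sketch of the same flow, with backgrounding shown only for illustration (the harness manages the process itself):

    # start bdevio in wait-for-rpc mode against the shared bdev config
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # once it is listening, run all registered CUnit suites
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests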
00:07:05.777 [2024-11-26 04:03:07.457600] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72068 ] 00:07:06.038 [2024-11-26 04:03:07.603922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.038 [2024-11-26 04:03:07.637824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.038 [2024-11-26 04:03:07.638428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.038 [2024-11-26 04:03:07.638457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.611 04:03:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.611 04:03:08 -- common/autotest_common.sh@862 -- # return 0 00:07:06.611 04:03:08 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:06.611 I/O targets: 00:07:06.611 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:06.611 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:06.611 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.611 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.611 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:06.611 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:06.611 00:07:06.611 00:07:06.611 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.611 http://cunit.sourceforge.net/ 00:07:06.611 00:07:06.611 00:07:06.611 Suite: bdevio tests on: Nvme3n1 00:07:06.611 Test: blockdev write read block ...passed 00:07:06.873 Test: blockdev write zeroes read block ...passed 00:07:06.873 Test: blockdev write zeroes read no split ...passed 00:07:06.873 Test: blockdev write zeroes read split ...passed 00:07:06.873 Test: blockdev write zeroes read split partial ...passed 00:07:06.873 Test: blockdev reset ...[2024-11-26 04:03:08.387456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:06.873 passed 00:07:06.873 Test: blockdev write read 8 blocks ...[2024-11-26 04:03:08.391195] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:06.873 passed 00:07:06.873 Test: blockdev write read size > 128k ...passed 00:07:06.873 Test: blockdev write read invalid size ...passed 00:07:06.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.873 Test: blockdev write read max offset ...passed 00:07:06.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.873 Test: blockdev writev readv 8 blocks ...passed 00:07:06.873 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.873 Test: blockdev writev readv block ...passed 00:07:06.873 Test: blockdev writev readv size > 128k ...passed 00:07:06.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.873 Test: blockdev comparev and writev ...[2024-11-26 04:03:08.408907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c400e000 len:0x1000 00:07:06.873 [2024-11-26 04:03:08.408951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.873 passed 00:07:06.873 Test: blockdev nvme passthru rw ...passed 00:07:06.873 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.873 Test: blockdev nvme admin passthru ...[2024-11-26 04:03:08.411141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.873 [2024-11-26 04:03:08.411174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.873 passed 00:07:06.873 Test: blockdev copy ...passed 00:07:06.873 Suite: bdevio tests on: Nvme2n3 00:07:06.873 Test: blockdev write read block ...passed 00:07:06.873 Test: blockdev write zeroes read block ...passed 00:07:06.873 Test: blockdev write zeroes read no split ...passed 00:07:06.873 Test: blockdev write zeroes read split ...passed 00:07:06.873 Test: blockdev write zeroes read split partial ...passed 00:07:06.873 Test: blockdev reset ...[2024-11-26 04:03:08.437675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:06.873 [2024-11-26 04:03:08.440654] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:06.873 passed 00:07:06.873 Test: blockdev write read 8 blocks ...passed 00:07:06.873 Test: blockdev write read size > 128k ...passed 00:07:06.873 Test: blockdev write read invalid size ...passed 00:07:06.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.873 Test: blockdev write read max offset ...passed 00:07:06.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.873 Test: blockdev writev readv 8 blocks ...passed 00:07:06.873 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.873 Test: blockdev writev readv block ...passed 00:07:06.873 Test: blockdev writev readv size > 128k ...passed 00:07:06.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.873 Test: blockdev comparev and writev ...[2024-11-26 04:03:08.457039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4008000 len:0x1000 00:07:06.873 [2024-11-26 04:03:08.457147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.873 passed 00:07:06.873 Test: blockdev nvme passthru rw ...passed 00:07:06.873 Test: blockdev nvme passthru vendor specific ...[2024-11-26 04:03:08.460417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:07:06.873 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:07:06.873 [2024-11-26 04:03:08.460720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.873 passed 00:07:06.873 Test: blockdev copy ...passed 00:07:06.873 Suite: bdevio tests on: Nvme2n2 00:07:06.873 Test: blockdev write read block ...passed 00:07:06.873 Test: blockdev write zeroes read block ...passed 00:07:06.873 Test: blockdev write zeroes read no split ...passed 00:07:06.873 Test: blockdev write zeroes read split ...passed 00:07:06.873 Test: blockdev write zeroes read split partial ...passed 00:07:06.873 Test: blockdev reset ...[2024-11-26 04:03:08.487052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:06.873 [2024-11-26 04:03:08.492194] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:06.873 passed 00:07:06.873 Test: blockdev write read 8 blocks ...passed 00:07:06.873 Test: blockdev write read size > 128k ...passed 00:07:06.873 Test: blockdev write read invalid size ...passed 00:07:06.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.873 Test: blockdev write read max offset ...passed 00:07:06.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.873 Test: blockdev writev readv 8 blocks ...passed 00:07:06.873 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.873 Test: blockdev writev readv block ...passed 00:07:06.873 Test: blockdev writev readv size > 128k ...passed 00:07:06.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.873 Test: blockdev comparev and writev ...[2024-11-26 04:03:08.508056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4004000 len:0x1000 00:07:06.873 [2024-11-26 04:03:08.508171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.873 passed 00:07:06.873 Test: blockdev nvme passthru rw ...passed 00:07:06.873 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.874 Test: blockdev nvme admin passthru ...[2024-11-26 04:03:08.510835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.874 [2024-11-26 04:03:08.510867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.874 passed 00:07:06.874 Test: blockdev copy ...passed 00:07:06.874 Suite: bdevio tests on: Nvme2n1 00:07:06.874 Test: blockdev write read block ...passed 00:07:06.874 Test: blockdev write zeroes read block ...passed 00:07:06.874 Test: blockdev write zeroes read no split ...passed 00:07:06.874 Test: blockdev write zeroes read split ...passed 00:07:06.874 Test: blockdev write zeroes read split partial ...passed 00:07:06.874 Test: blockdev reset ...[2024-11-26 04:03:08.535496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:06.874 passed 00:07:06.874 Test: blockdev write read 8 blocks ...[2024-11-26 04:03:08.537280] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:06.874 passed 00:07:06.874 Test: blockdev write read size > 128k ...passed 00:07:06.874 Test: blockdev write read invalid size ...passed 00:07:06.874 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.874 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.874 Test: blockdev write read max offset ...passed 00:07:06.874 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.874 Test: blockdev writev readv 8 blocks ...passed 00:07:06.874 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.874 Test: blockdev writev readv block ...passed 00:07:06.874 Test: blockdev writev readv size > 128k ...passed 00:07:06.874 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.874 Test: blockdev comparev and writev ...[2024-11-26 04:03:08.553553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4004000 len:0x1000 00:07:06.874 [2024-11-26 04:03:08.553666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.874 passed 00:07:06.874 Test: blockdev nvme passthru rw ...passed 00:07:06.874 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.874 Test: blockdev nvme admin passthru ...[2024-11-26 04:03:08.556416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.874 [2024-11-26 04:03:08.556447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.874 passed 00:07:06.874 Test: blockdev copy ...passed 00:07:06.874 Suite: bdevio tests on: Nvme1n1 00:07:06.874 Test: blockdev write read block ...passed 00:07:06.874 Test: blockdev write zeroes read block ...passed 00:07:06.874 Test: blockdev write zeroes read no split ...passed 00:07:06.874 Test: blockdev write zeroes read split ...passed 00:07:06.874 Test: blockdev write zeroes read split partial ...passed 00:07:06.874 Test: blockdev reset ...[2024-11-26 04:03:08.586544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:06.874 passed 00:07:06.874 Test: blockdev write read 8 blocks ...[2024-11-26 04:03:08.589054] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:06.874 passed 00:07:06.874 Test: blockdev write read size > 128k ...passed 00:07:06.874 Test: blockdev write read invalid size ...passed 00:07:06.874 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.874 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.874 Test: blockdev write read max offset ...passed 00:07:06.874 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.874 Test: blockdev writev readv 8 blocks ...passed 00:07:06.874 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.874 Test: blockdev writev readv block ...passed 00:07:06.874 Test: blockdev writev readv size > 128k ...passed 00:07:06.874 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.874 Test: blockdev comparev and writev ...[2024-11-26 04:03:08.604594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d060e000 len:0x1000 00:07:06.874 [2024-11-26 04:03:08.604627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.874 passed 00:07:06.874 Test: blockdev nvme passthru rw ...passed 00:07:06.874 Test: blockdev nvme passthru vendor specific ...[2024-11-26 04:03:08.606853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.874 [2024-11-26 04:03:08.606882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.874 passed 00:07:06.874 Test: blockdev nvme admin passthru ...passed 00:07:06.874 Test: blockdev copy ...passed 00:07:06.874 Suite: bdevio tests on: Nvme0n1 00:07:06.874 Test: blockdev write read block ...passed 00:07:06.874 Test: blockdev write zeroes read block ...passed 00:07:06.874 Test: blockdev write zeroes read no split ...passed 00:07:06.874 Test: blockdev write zeroes read split ...passed 00:07:07.137 Test: blockdev write zeroes read split partial ...passed 00:07:07.137 Test: blockdev reset ...[2024-11-26 04:03:08.637386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:07.137 [2024-11-26 04:03:08.639327] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:07.137 passed 00:07:07.137 Test: blockdev write read 8 blocks ...passed 00:07:07.137 Test: blockdev write read size > 128k ...passed 00:07:07.137 Test: blockdev write read invalid size ...passed 00:07:07.137 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.137 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.137 Test: blockdev write read max offset ...passed 00:07:07.137 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.137 Test: blockdev writev readv 8 blocks ...passed 00:07:07.137 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.137 Test: blockdev writev readv block ...passed 00:07:07.137 Test: blockdev writev readv size > 128k ...passed 00:07:07.137 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.137 Test: blockdev comparev and writev ...passed 00:07:07.137 Test: blockdev nvme passthru rw ...[2024-11-26 04:03:08.652761] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:07.137 separate metadata which is not supported yet. 
00:07:07.137 passed 00:07:07.137 Test: blockdev nvme passthru vendor specific ...[2024-11-26 04:03:08.654548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:07.137 [2024-11-26 04:03:08.654661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:07.137 passed 00:07:07.137 Test: blockdev nvme admin passthru ...passed 00:07:07.137 Test: blockdev copy ...passed 00:07:07.137 00:07:07.137 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.137 suites 6 6 n/a 0 0 00:07:07.137 tests 138 138 138 0 0 00:07:07.137 asserts 893 893 893 0 n/a 00:07:07.137 00:07:07.137 Elapsed time = 0.628 seconds 00:07:07.137 0 00:07:07.137 04:03:08 -- bdev/blockdev.sh@293 -- # killprocess 72068 00:07:07.137 04:03:08 -- common/autotest_common.sh@936 -- # '[' -z 72068 ']' 00:07:07.137 04:03:08 -- common/autotest_common.sh@940 -- # kill -0 72068 00:07:07.137 04:03:08 -- common/autotest_common.sh@941 -- # uname 00:07:07.137 04:03:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.137 04:03:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72068 00:07:07.137 04:03:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.137 04:03:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.137 04:03:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72068' 00:07:07.137 killing process with pid 72068 00:07:07.137 04:03:08 -- common/autotest_common.sh@955 -- # kill 72068 00:07:07.137 04:03:08 -- common/autotest_common.sh@960 -- # wait 72068 00:07:07.137 04:03:08 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:07.137 00:07:07.137 real 0m1.450s 00:07:07.137 user 0m3.615s 00:07:07.137 sys 0m0.236s 00:07:07.137 04:03:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.137 04:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.137 ************************************ 00:07:07.137 END TEST bdev_bounds 00:07:07.137 ************************************ 00:07:07.397 04:03:08 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.397 04:03:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:07.397 04:03:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.397 04:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.397 ************************************ 00:07:07.397 START TEST bdev_nbd 00:07:07.397 ************************************ 00:07:07.397 04:03:08 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.397 04:03:08 -- bdev/blockdev.sh@298 -- # uname -s 00:07:07.397 04:03:08 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:07.397 04:03:08 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.397 04:03:08 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.397 04:03:08 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.397 04:03:08 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:07.398 04:03:08 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:07.398 04:03:08 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:07.398 04:03:08 -- bdev/blockdev.sh@309 -- # 
nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:07.398 04:03:08 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:07.398 04:03:08 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:07.398 04:03:08 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.398 04:03:08 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:07.398 04:03:08 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.398 04:03:08 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:07.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.398 04:03:08 -- bdev/blockdev.sh@316 -- # nbd_pid=72117 00:07:07.398 04:03:08 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:07.398 04:03:08 -- bdev/blockdev.sh@318 -- # waitforlisten 72117 /var/tmp/spdk-nbd.sock 00:07:07.398 04:03:08 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.398 04:03:08 -- common/autotest_common.sh@829 -- # '[' -z 72117 ']' 00:07:07.398 04:03:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.398 04:03:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.398 04:03:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.398 04:03:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.398 04:03:08 -- common/autotest_common.sh@10 -- # set +x 00:07:07.398 [2024-11-26 04:03:08.981385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
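Unlike the earlier suites, which talk to the default /var/tmp/spdk.sock, the nbd test runs a bare bdev_svc app on its own RPC socket, so every rpc.py call that follows carries -s /var/tmp/spdk-nbd.sock. A minimal sketch of that setup, with backgrounding again shown only for illustration:

    # start the bare bdev service on a dedicated RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    # later calls must name the same socket explicitly
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks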
00:07:07.398 [2024-11-26 04:03:08.981492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.398 [2024-11-26 04:03:09.128087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.398 [2024-11-26 04:03:09.159652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.336 04:03:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.337 04:03:09 -- common/autotest_common.sh@862 -- # return 0 00:07:08.337 04:03:09 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@24 -- # local i 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.337 04:03:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:08.337 04:03:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:08.337 04:03:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:08.337 04:03:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:08.337 04:03:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:08.337 04:03:10 -- common/autotest_common.sh@867 -- # local i 00:07:08.337 04:03:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:08.337 04:03:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:08.337 04:03:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:08.337 04:03:10 -- common/autotest_common.sh@871 -- # break 00:07:08.337 04:03:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:08.337 04:03:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:08.337 04:03:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.337 1+0 records in 00:07:08.337 1+0 records out 00:07:08.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133907 s, 3.1 MB/s 00:07:08.337 04:03:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.337 04:03:10 -- common/autotest_common.sh@884 -- # size=4096 00:07:08.337 04:03:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.337 04:03:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:08.337 04:03:10 -- common/autotest_common.sh@887 -- # return 0 00:07:08.337 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:08.337 04:03:10 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.337 04:03:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:08.598 04:03:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:08.598 04:03:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:08.598 04:03:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:08.599 04:03:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:08.599 04:03:10 -- common/autotest_common.sh@867 -- # local i 00:07:08.599 04:03:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:08.599 04:03:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:08.599 04:03:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:08.599 04:03:10 -- common/autotest_common.sh@871 -- # break 00:07:08.599 04:03:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:08.599 04:03:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:08.599 04:03:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.599 1+0 records in 00:07:08.599 1+0 records out 00:07:08.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771235 s, 5.3 MB/s 00:07:08.599 04:03:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.599 04:03:10 -- common/autotest_common.sh@884 -- # size=4096 00:07:08.599 04:03:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.599 04:03:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:08.599 04:03:10 -- common/autotest_common.sh@887 -- # return 0 00:07:08.599 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:08.599 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.599 04:03:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:08.860 04:03:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:08.860 04:03:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:08.860 04:03:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:08.860 04:03:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:08.860 04:03:10 -- common/autotest_common.sh@867 -- # local i 00:07:08.860 04:03:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:08.860 04:03:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:08.860 04:03:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:08.861 04:03:10 -- common/autotest_common.sh@871 -- # break 00:07:08.861 04:03:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:08.861 04:03:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:08.861 04:03:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.861 1+0 records in 00:07:08.861 1+0 records out 00:07:08.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066191 s, 6.2 MB/s 00:07:08.861 04:03:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.861 04:03:10 -- common/autotest_common.sh@884 -- # size=4096 00:07:08.861 04:03:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.861 04:03:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:08.861 04:03:10 -- common/autotest_common.sh@887 -- # return 0 
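Each nbd attach above is validated with the same three-step pattern: poll /proc/partitions until the kernel registers the device, read one block through it with O_DIRECT, and check that exactly 4096 bytes were copied. A condensed sketch for /dev/nbd0 (the traced waitfornbd helper bounds the poll at 20 retries; the open loop and the /tmp scratch path here are simplifications):

    # wait for the kernel to register the nbd device
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # read a single 4 KiB block off the device, bypassing the page cache
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # the scratch file must hold exactly one block
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]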
00:07:08.861 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:08.861 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.861 04:03:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:09.122 04:03:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:09.122 04:03:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:09.122 04:03:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:09.122 04:03:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:09.122 04:03:10 -- common/autotest_common.sh@867 -- # local i 00:07:09.122 04:03:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:09.122 04:03:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:09.122 04:03:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:09.122 04:03:10 -- common/autotest_common.sh@871 -- # break 00:07:09.122 04:03:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:09.122 04:03:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:09.122 04:03:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.122 1+0 records in 00:07:09.122 1+0 records out 00:07:09.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112717 s, 3.6 MB/s 00:07:09.122 04:03:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.122 04:03:10 -- common/autotest_common.sh@884 -- # size=4096 00:07:09.122 04:03:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.122 04:03:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:09.122 04:03:10 -- common/autotest_common.sh@887 -- # return 0 00:07:09.122 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.122 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.122 04:03:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:09.384 04:03:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:09.384 04:03:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:09.384 04:03:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:09.384 04:03:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:09.384 04:03:10 -- common/autotest_common.sh@867 -- # local i 00:07:09.384 04:03:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:09.384 04:03:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:09.384 04:03:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:09.384 04:03:10 -- common/autotest_common.sh@871 -- # break 00:07:09.384 04:03:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:09.384 04:03:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:09.384 04:03:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.384 1+0 records in 00:07:09.384 1+0 records out 00:07:09.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0009719 s, 4.2 MB/s 00:07:09.384 04:03:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.384 04:03:10 -- common/autotest_common.sh@884 -- # size=4096 00:07:09.384 04:03:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.384 04:03:10 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:09.384 04:03:10 -- common/autotest_common.sh@887 -- # return 0 00:07:09.384 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.384 04:03:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.384 04:03:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:09.384 04:03:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:09.384 04:03:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:09.646 04:03:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:09.646 04:03:11 -- common/autotest_common.sh@867 -- # local i 00:07:09.646 04:03:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:09.646 04:03:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:09.646 04:03:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:09.646 04:03:11 -- common/autotest_common.sh@871 -- # break 00:07:09.646 04:03:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:09.646 04:03:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:09.646 04:03:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.646 1+0 records in 00:07:09.646 1+0 records out 00:07:09.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134837 s, 3.0 MB/s 00:07:09.646 04:03:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.646 04:03:11 -- common/autotest_common.sh@884 -- # size=4096 00:07:09.646 04:03:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.646 04:03:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:09.646 04:03:11 -- common/autotest_common.sh@887 -- # return 0 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd0", 00:07:09.646 "bdev_name": "Nvme0n1" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd1", 00:07:09.646 "bdev_name": "Nvme1n1" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd2", 00:07:09.646 "bdev_name": "Nvme2n1" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd3", 00:07:09.646 "bdev_name": "Nvme2n2" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd4", 00:07:09.646 "bdev_name": "Nvme2n3" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd5", 00:07:09.646 "bdev_name": "Nvme3n1" 00:07:09.646 } 00:07:09.646 ]' 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd0", 00:07:09.646 "bdev_name": "Nvme0n1" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd1", 00:07:09.646 "bdev_name": "Nvme1n1" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd2", 00:07:09.646 "bdev_name": "Nvme2n1" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd3", 00:07:09.646 "bdev_name": "Nvme2n2" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": 
"/dev/nbd4", 00:07:09.646 "bdev_name": "Nvme2n3" 00:07:09.646 }, 00:07:09.646 { 00:07:09.646 "nbd_device": "/dev/nbd5", 00:07:09.646 "bdev_name": "Nvme3n1" 00:07:09.646 } 00:07:09.646 ]' 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@51 -- # local i 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.646 04:03:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@41 -- # break 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.908 04:03:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@41 -- # break 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.167 04:03:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@41 -- # break 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.427 04:03:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:10.427 
04:03:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@41 -- # break 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.427 04:03:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@41 -- # break 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.689 04:03:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@41 -- # break 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.968 04:03:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@65 -- # true 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@122 -- # count=0 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@127 -- # return 0 00:07:11.229 04:03:12 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@12 -- # local i 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.229 04:03:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:11.490 /dev/nbd0 00:07:11.490 04:03:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.490 04:03:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.490 04:03:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:11.490 04:03:13 -- common/autotest_common.sh@867 -- # local i 00:07:11.490 04:03:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:11.490 04:03:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:11.490 04:03:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:11.490 04:03:13 -- common/autotest_common.sh@871 -- # break 00:07:11.490 04:03:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:11.490 04:03:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:11.490 04:03:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.490 1+0 records in 00:07:11.490 1+0 records out 00:07:11.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106226 s, 3.9 MB/s 00:07:11.490 04:03:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.490 04:03:13 -- common/autotest_common.sh@884 -- # size=4096 00:07:11.490 04:03:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.490 04:03:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.490 04:03:13 -- common/autotest_common.sh@887 -- # return 0 00:07:11.490 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.490 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.490 04:03:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:11.490 /dev/nbd1 00:07:11.490 04:03:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.490 04:03:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.490 04:03:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:11.490 04:03:13 -- common/autotest_common.sh@867 -- # local i 00:07:11.490 04:03:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:11.490 04:03:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:11.490 04:03:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:11.490 04:03:13 -- common/autotest_common.sh@871 -- # break 
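Start/stop verification brackets the data pass: nbd_get_disks reports the live device-to-bdev mapping as JSON, each node is detached with nbd_stop_disk, an empty listing (count 0) confirms teardown, and the bdevs are then re-attached under /dev/nbd0, /dev/nbd1 and /dev/nbd10-13 for the data-verify round seen here. A sketch of one attach/inspect/detach cycle over the same socket (the rpc variable is shorthand introduced only for this example):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attach a bdev to a specific nbd node
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # list current mappings and extract the device nodes (same jq filter as the trace)
    $rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
    # detach it again; an empty nbd_get_disks listing confirms teardown
    $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0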
00:07:11.490 04:03:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:11.490 04:03:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:11.490 04:03:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.490 1+0 records in 00:07:11.490 1+0 records out 00:07:11.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135141 s, 3.0 MB/s 00:07:11.490 04:03:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.752 04:03:13 -- common/autotest_common.sh@884 -- # size=4096 00:07:11.752 04:03:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.752 04:03:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.752 04:03:13 -- common/autotest_common.sh@887 -- # return 0 00:07:11.752 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.752 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.752 04:03:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:11.752 /dev/nbd10 00:07:11.752 04:03:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:11.752 04:03:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:11.752 04:03:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:11.752 04:03:13 -- common/autotest_common.sh@867 -- # local i 00:07:11.752 04:03:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:11.752 04:03:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:11.752 04:03:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:11.752 04:03:13 -- common/autotest_common.sh@871 -- # break 00:07:11.752 04:03:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:11.752 04:03:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:11.752 04:03:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.752 1+0 records in 00:07:11.752 1+0 records out 00:07:11.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118398 s, 3.5 MB/s 00:07:11.752 04:03:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.752 04:03:13 -- common/autotest_common.sh@884 -- # size=4096 00:07:11.752 04:03:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.752 04:03:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.752 04:03:13 -- common/autotest_common.sh@887 -- # return 0 00:07:11.752 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.752 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.752 04:03:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:12.013 /dev/nbd11 00:07:12.013 04:03:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:12.013 04:03:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:12.013 04:03:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:12.013 04:03:13 -- common/autotest_common.sh@867 -- # local i 00:07:12.013 04:03:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:12.013 04:03:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:12.013 04:03:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:12.013 04:03:13 -- 
common/autotest_common.sh@871 -- # break 00:07:12.013 04:03:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:12.013 04:03:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:12.013 04:03:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.013 1+0 records in 00:07:12.013 1+0 records out 00:07:12.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477647 s, 8.6 MB/s 00:07:12.013 04:03:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.013 04:03:13 -- common/autotest_common.sh@884 -- # size=4096 00:07:12.013 04:03:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.013 04:03:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:12.013 04:03:13 -- common/autotest_common.sh@887 -- # return 0 00:07:12.013 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.013 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.013 04:03:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:12.278 /dev/nbd12 00:07:12.278 04:03:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:12.278 04:03:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:12.278 04:03:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:12.278 04:03:13 -- common/autotest_common.sh@867 -- # local i 00:07:12.278 04:03:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:12.278 04:03:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:12.278 04:03:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:12.278 04:03:13 -- common/autotest_common.sh@871 -- # break 00:07:12.278 04:03:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:12.278 04:03:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:12.278 04:03:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.278 1+0 records in 00:07:12.278 1+0 records out 00:07:12.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683083 s, 6.0 MB/s 00:07:12.278 04:03:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.278 04:03:13 -- common/autotest_common.sh@884 -- # size=4096 00:07:12.278 04:03:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.278 04:03:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:12.278 04:03:13 -- common/autotest_common.sh@887 -- # return 0 00:07:12.278 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.278 04:03:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.278 04:03:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:12.545 /dev/nbd13 00:07:12.545 04:03:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:12.545 04:03:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:12.545 04:03:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:12.545 04:03:14 -- common/autotest_common.sh@867 -- # local i 00:07:12.545 04:03:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:12.545 04:03:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:12.545 04:03:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:07:12.545 04:03:14 -- common/autotest_common.sh@871 -- # break 00:07:12.545 04:03:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:12.545 04:03:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:12.545 04:03:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.545 1+0 records in 00:07:12.545 1+0 records out 00:07:12.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110233 s, 3.7 MB/s 00:07:12.545 04:03:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.545 04:03:14 -- common/autotest_common.sh@884 -- # size=4096 00:07:12.545 04:03:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.545 04:03:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:12.545 04:03:14 -- common/autotest_common.sh@887 -- # return 0 00:07:12.545 04:03:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.545 04:03:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.545 04:03:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.545 04:03:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.545 04:03:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.806 04:03:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd0", 00:07:12.806 "bdev_name": "Nvme0n1" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd1", 00:07:12.806 "bdev_name": "Nvme1n1" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd10", 00:07:12.806 "bdev_name": "Nvme2n1" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd11", 00:07:12.806 "bdev_name": "Nvme2n2" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd12", 00:07:12.806 "bdev_name": "Nvme2n3" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd13", 00:07:12.806 "bdev_name": "Nvme3n1" 00:07:12.806 } 00:07:12.806 ]' 00:07:12.806 04:03:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.806 04:03:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd0", 00:07:12.806 "bdev_name": "Nvme0n1" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd1", 00:07:12.806 "bdev_name": "Nvme1n1" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd10", 00:07:12.806 "bdev_name": "Nvme2n1" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd11", 00:07:12.806 "bdev_name": "Nvme2n2" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd12", 00:07:12.806 "bdev_name": "Nvme2n3" 00:07:12.806 }, 00:07:12.806 { 00:07:12.806 "nbd_device": "/dev/nbd13", 00:07:12.806 "bdev_name": "Nvme3n1" 00:07:12.806 } 00:07:12.806 ]' 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.807 /dev/nbd1 00:07:12.807 /dev/nbd10 00:07:12.807 /dev/nbd11 00:07:12.807 /dev/nbd12 00:07:12.807 /dev/nbd13' 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.807 /dev/nbd1 00:07:12.807 /dev/nbd10 00:07:12.807 /dev/nbd11 00:07:12.807 /dev/nbd12 00:07:12.807 /dev/nbd13' 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@65 -- # count=6 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@95 -- # count=6 
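Worth pausing on the two polling helpers this whole stretch leans on. After each nbd_start_disk RPC, waitfornbd (the common/autotest_common.sh lines in the trace) waits for the node to appear in /proc/partitions and then proves the device actually services I/O by reading a single 4 KiB block with O_DIRECT; after each nbd_stop_disk, waitfornbd_exit (the bdev/nbd_common.sh lines) waits for the node to vanish. A condensed sketch of both, reconstructed from the traced commands (the 20-try bound is the (( i <= 20 )) guard visible above; the sleep interval is an assumption the trace does not show):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # node exists
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # One O_DIRECT read proves the device answers I/O, not just that it exists.
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # node is gone
            sleep 0.1
        done
        return 0
    }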
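The count=6 just computed, plus the write/compare pass that follows below, boil down to a short round trip. A sketch assuming rpc.py is on PATH and using the six-device list from the trace (the || true mirrors the traced `true` and keeps grep -c's non-zero exit from aborting the run when nothing is attached):

    rpc=/var/tmp/spdk-nbd.sock
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

    # Count attached devices: dump the JSON array from the RPC server,
    # project out the device paths with jq, count the /dev/nbd matches.
    count=$(rpc.py -s "$rpc" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq "${#nbd_list[@]}" ]

    # Push the same 1 MiB of random data through every device with O_DIRECT...
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # ...then read each device back and compare byte-for-byte against the source.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"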
00:07:12.807 04:03:14 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:12.807 256+0 records in 00:07:12.807 256+0 records out 00:07:12.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577616 s, 182 MB/s 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.807 256+0 records in 00:07:12.807 256+0 records out 00:07:12.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175303 s, 6.0 MB/s 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.807 04:03:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.068 256+0 records in 00:07:13.068 256+0 records out 00:07:13.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146115 s, 7.2 MB/s 00:07:13.068 04:03:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.068 04:03:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:13.068 256+0 records in 00:07:13.068 256+0 records out 00:07:13.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0956472 s, 11.0 MB/s 00:07:13.068 04:03:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.068 04:03:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:13.329 256+0 records in 00:07:13.329 256+0 records out 00:07:13.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0920535 s, 11.4 MB/s 00:07:13.329 04:03:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.329 04:03:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:13.329 256+0 records in 00:07:13.329 256+0 records out 00:07:13.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136912 s, 7.7 MB/s 00:07:13.329 04:03:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.329 04:03:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:13.590 256+0 records in 00:07:13.590 256+0 records out 00:07:13.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0908493 s, 11.5 MB/s 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.590 04:03:15 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@51 -- # local i 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.590 04:03:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@41 -- # break 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.850 04:03:15 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@41 -- # break 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.850 04:03:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@41 -- # break 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.111 04:03:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@41 -- # break 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.370 04:03:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@41 -- # break 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@41 -- # break 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.630 04:03:16 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@65 -- # true 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.889 04:03:16 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:14.889 04:03:16 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:15.149 malloc_lvol_verify 00:07:15.149 04:03:16 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:15.409 bc3bedfb-f932-48ff-a337-7e9b3d3fcf80 00:07:15.409 04:03:16 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:15.409 046fe41e-a183-4188-9efd-581d9af77913 00:07:15.409 04:03:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:15.670 /dev/nbd0 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:15.670 mke2fs 1.47.0 (5-Feb-2023) 00:07:15.670 Discarding device blocks: 0/4096 done 00:07:15.670 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:15.670 00:07:15.670 Allocating group tables: 0/1 done 00:07:15.670 Writing inode tables: 0/1 done 00:07:15.670 Creating journal (1024 blocks): done 00:07:15.670 Writing superblocks and filesystem accounting information: 0/1 done 00:07:15.670 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@51 -- # local i 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.670 04:03:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@41 -- # break 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:15.930 04:03:17 -- bdev/nbd_common.sh@147 -- # return 0 00:07:15.930 04:03:17 -- bdev/blockdev.sh@324 -- # killprocess 72117 00:07:15.930 04:03:17 -- common/autotest_common.sh@936 -- # '[' -z 72117 ']' 00:07:15.930 04:03:17 -- common/autotest_common.sh@940 -- # kill -0 72117 00:07:15.930 04:03:17 -- common/autotest_common.sh@941 -- # uname 00:07:15.930 04:03:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.930 04:03:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72117 00:07:15.930 killing process with pid 72117 00:07:15.930 04:03:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.930 04:03:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.930 04:03:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72117' 00:07:15.930 04:03:17 -- common/autotest_common.sh@955 -- # kill 72117 00:07:15.931 04:03:17 -- common/autotest_common.sh@960 -- # wait 72117 00:07:16.192 ************************************ 00:07:16.192 END TEST bdev_nbd 00:07:16.192 ************************************ 00:07:16.192 04:03:17 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:16.192 00:07:16.192 real 0m8.876s 00:07:16.192 user 0m12.666s 00:07:16.192 sys 0m2.991s 00:07:16.192 04:03:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.192 04:03:17 -- common/autotest_common.sh@10 -- # set +x 00:07:16.192 04:03:17 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:16.192 04:03:17 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:16.192 skipping fio tests on NVMe due to multi-ns failures. 00:07:16.192 04:03:17 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:16.192 04:03:17 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:16.192 04:03:17 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:16.192 04:03:17 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:16.192 04:03:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.192 04:03:17 -- common/autotest_common.sh@10 -- # set +x 00:07:16.192 ************************************ 00:07:16.192 START TEST bdev_verify 00:07:16.192 ************************************ 00:07:16.192 04:03:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:16.192 [2024-11-26 04:03:17.890917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
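Before the verify output continues below, note how the nbd suite signed off just above: nbd_with_lvol_verify builds a throwaway logical volume over a malloc bdev, exports it as /dev/nbd0, and treats a successful mkfs.ext4 as the pass condition. The RPC sequence as a sketch (names and arguments straight from the trace; reading the sizes as MiB is an assumption about the RPC units):

    rpc=/var/tmp/spdk-nbd.sock
    rpc.py -s "$rpc" bdev_malloc_create -b malloc_lvol_verify 16 512  # size 16, block size 512, per the traced args
    rpc.py -s "$rpc" bdev_lvol_create_lvstore malloc_lvol_verify lvs  # prints the new lvstore UUID
    rpc.py -s "$rpc" bdev_lvol_create lvol 4 -l lvs                   # small lvol inside that store
    rpc.py -s "$rpc" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0                # must succeed: mkfs_ret is checked against 0
    rpc.py -s "$rpc" nbd_stop_disk /dev/nbd0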
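The run starting here, and the big-I/O and write-zeroes runs after it, are all the same bdevperf binary pointed at the generated bdev.json; only the workload knobs change between stages. The invocation with its flags unpacked (paths and values exactly as traced; the glosses follow bdevperf's usual option meanings):

    # -q 128      queue depth per job
    # -o 4096     I/O size in bytes (the big-I/O stage bumps this to 65536)
    # -w verify   workload type (the later zero-fill stage uses -w write_zeroes)
    # -t 5        run time in seconds (-t 1 for the write-zeroes stage)
    # -m 0x3      core mask: cores 0 and 1
    # -C          let every core drive every bdev, which is why each Nvme
    #             device shows up twice in the table below, once per core mask
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3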
00:07:16.192 [2024-11-26 04:03:17.891038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72488 ]
00:07:16.454 [2024-11-26 04:03:18.038595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:16.454 [2024-11-26 04:03:18.071833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.454 [2024-11-26 04:03:18.071837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:16.716 Running I/O for 5 seconds...
00:07:22.004
00:07:22.004 Latency(us)
00:07:22.004 [2024-11-26T04:03:23.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:22.004 [2024-11-26T04:03:23.772Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.004 Verification LBA range: start 0x0 length 0xbd0bd
00:07:22.004 Nvme0n1 : 5.04 2317.91 9.05 0.00 0.00 55050.27 10586.58 68157.44
00:07:22.004 [2024-11-26T04:03:23.772Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.004 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:22.004 Nvme0n1 : 5.04 2742.93 10.71 0.00 0.00 46544.67 6175.51 65737.65
00:07:22.004 [2024-11-26T04:03:23.772Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.004 Verification LBA range: start 0x0 length 0xa0000
00:07:22.004 Nvme1n1 : 5.04 2322.83 9.07 0.00 0.00 54837.49 5242.88 65334.35
00:07:22.004 [2024-11-26T04:03:23.772Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.004 Verification LBA range: start 0xa0000 length 0xa0000
00:07:22.004 Nvme1n1 : 5.04 2741.89 10.71 0.00 0.00 46522.37 7360.20 63317.86
00:07:22.004 [2024-11-26T04:03:23.772Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.004 Verification LBA range: start 0x0 length 0x80000
00:07:22.004 Nvme2n1 : 5.05 2327.17 9.09 0.00 0.00 54550.18 6225.92 63721.16
00:07:22.004 [2024-11-26T04:03:23.772Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.004 Verification LBA range: start 0x80000 length 0x80000
00:07:22.004 Nvme2n1 : 5.04 2741.19 10.71 0.00 0.00 46455.89 7965.14 58881.58
00:07:22.004 [2024-11-26T04:03:23.772Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.004 Verification LBA range: start 0x0 length 0x80000
00:07:22.004 Nvme2n2 : 5.06 2341.11 9.14 0.00 0.00 54169.80 2457.60 62914.56
00:07:22.004 [2024-11-26T04:03:23.773Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.005 Verification LBA range: start 0x80000 length 0x80000
00:07:22.005 Nvme2n2 : 5.05 2747.77 10.73 0.00 0.00 46304.56 1915.67 52428.80
00:07:22.005 [2024-11-26T04:03:23.773Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.005 Verification LBA range: start 0x0 length 0x80000
00:07:22.005 Nvme2n3 : 5.06 2340.56 9.14 0.00 0.00 54149.58 2961.72 61301.37
00:07:22.005 [2024-11-26T04:03:23.773Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.005 Verification LBA range: start 0x80000 length 0x80000
00:07:22.005 Nvme2n3 : 5.05 2746.95 10.73 0.00 0.00 46271.12 2634.04 51017.26
00:07:22.005 [2024-11-26T04:03:23.773Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:22.005 Verification LBA range: start 0x0 length 0x20000
00:07:22.005 Nvme3n1 : 5.06 2340.00 9.14 0.00 0.00 54123.28 3453.24 64931.05
00:07:22.005 [2024-11-26T04:03:23.773Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:22.005 Verification LBA range: start 0x20000 length 0x20000
00:07:22.005 Nvme3n1 : 5.05 2751.74 10.75 0.00 0.00 46145.95 2369.38 48799.11
00:07:22.005 [2024-11-26T04:03:23.773Z] ===================================================================================================================
00:07:22.005 [2024-11-26T04:03:23.773Z] Total : 30462.06 118.99 0.00 0.00 50098.28 1915.67 68157.44
00:07:40.177 ************************************
00:07:40.177 END TEST bdev_verify
00:07:40.177 ************************************
00:07:40.177
00:07:40.177 real 0m21.360s
00:07:40.177 user 0m41.807s
00:07:40.177 sys 0m0.308s
00:07:40.177 04:03:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:40.177 04:03:39 -- common/autotest_common.sh@10 -- # set +x
00:07:40.177 ************************************
00:07:40.177 04:03:39 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:40.177 04:03:39 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:07:40.177 04:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:40.177 04:03:39 -- common/autotest_common.sh@10 -- # set +x
00:07:40.177 ************************************
00:07:40.177 START TEST bdev_verify_big_io
00:07:40.177 ************************************
00:07:40.177 04:03:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:40.177 [2024-11-26 04:03:39.326376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:40.177 [2024-11-26 04:03:39.326490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72694 ]
00:07:40.177 [2024-11-26 04:03:39.474322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:40.177 [2024-11-26 04:03:39.508634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:40.177 [2024-11-26 04:03:39.508832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.177 Running I/O for 5 seconds...
00:07:44.432
00:07:44.432 Latency(us)
00:07:44.432 [2024-11-26T04:03:46.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x0 length 0xbd0b
00:07:44.432 Nvme0n1 : 5.46 176.17 11.01 0.00 0.00 714960.16 7158.55 735616.39
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:44.432 Nvme0n1 : 5.31 295.36 18.46 0.00 0.00 421934.48 86305.87 784012.21
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x0 length 0xa000
00:07:44.432 Nvme1n1 : 5.46 176.11 11.01 0.00 0.00 705579.59 7461.02 667862.25
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0xa000 length 0xa000
00:07:44.432 Nvme1n1 : 5.37 299.56 18.72 0.00 0.00 409038.58 66947.54 703352.52
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x0 length 0x8000
00:07:44.432 Nvme2n1 : 5.46 176.06 11.00 0.00 0.00 694584.31 7612.26 629145.60
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x8000 length 0x8000
00:07:44.432 Nvme2n1 : 5.43 305.69 19.11 0.00 0.00 395991.99 49404.06 629145.60
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x0 length 0x8000
00:07:44.432 Nvme2n2 : 5.47 176.00 11.00 0.00 0.00 683348.81 7511.43 629145.60
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x8000 length 0x8000
00:07:44.432 Nvme2n2 : 5.45 313.02 19.56 0.00 0.00 382383.02 22383.06 561391.46
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x0 length 0x8000
00:07:44.432 Nvme2n3 : 5.47 184.95 11.56 0.00 0.00 643538.86 3402.83 635598.38
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x8000 length 0x8000
00:07:44.432 Nvme2n3 : 5.47 319.75 19.98 0.00 0.00 369489.37 10233.70 493637.32
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x0 length 0x2000
00:07:44.432 Nvme3n1 : 5.47 184.89 11.56 0.00 0.00 633812.45 4108.60 642051.15
00:07:44.432 [2024-11-26T04:03:46.200Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:44.432 Verification LBA range: start 0x2000 length 0x2000
00:07:44.432 Nvme3n1 : 5.47 333.76 20.86 0.00 0.00 348499.69 1663.61 442015.11
00:07:44.432 [2024-11-26T04:03:46.200Z] ===================================================================================================================
00:07:44.432 [2024-11-26T04:03:46.200Z] Total : 2941.32 183.83 0.00 0.00 493857.96 1663.61 784012.21
00:07:45.004 ************************************
END TEST bdev_verify_big_io
00:07:45.004 ************************************
00:07:45.004
00:07:45.004 real 0m7.409s
00:07:45.004 user 0m14.099s
00:07:45.004 sys 0m0.210s
00:07:45.004 04:03:46 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:45.004 04:03:46 -- common/autotest_common.sh@10 -- # set +x
00:07:45.004 04:03:46 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:45.004 04:03:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:07:45.004 04:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:45.004 04:03:46 -- common/autotest_common.sh@10 -- # set +x
00:07:45.004 ************************************
00:07:45.004 START TEST bdev_write_zeroes
00:07:45.004 ************************************
00:07:45.004 04:03:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:45.265 [2024-11-26 04:03:46.798807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:45.265 [2024-11-26 04:03:46.798919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72798 ]
00:07:45.265 [2024-11-26 04:03:46.942237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:45.265 [2024-11-26 04:03:46.983943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.834 Running I/O for 1 seconds...
00:07:46.772
00:07:46.772 Latency(us)
00:07:46.772 [2024-11-26T04:03:48.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:46.772 [2024-11-26T04:03:48.540Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:46.772 Nvme0n1 : 1.01 9790.58 38.24 0.00 0.00 13034.55 4940.41 25811.10
00:07:46.772 [2024-11-26T04:03:48.540Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:46.772 Nvme1n1 : 1.01 9794.19 38.26 0.00 0.00 13012.09 8721.33 22786.36
00:07:46.772 [2024-11-26T04:03:48.540Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:46.772 Nvme2n1 : 1.02 9823.76 38.37 0.00 0.00 12945.78 7057.72 22584.71
00:07:46.772 [2024-11-26T04:03:48.540Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:46.772 Nvme2n2 : 1.02 9812.50 38.33 0.00 0.00 12908.08 7561.85 22080.59
00:07:46.772 [2024-11-26T04:03:48.540Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:46.772 Nvme2n3 : 1.02 9850.39 38.48 0.00 0.00 12840.52 4940.41 22282.24
00:07:46.772 [2024-11-26T04:03:48.540Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:46.772 Nvme3n1 : 1.02 9839.00 38.43 0.00 0.00 12820.77 5394.12 22080.59
00:07:46.772 [2024-11-26T04:03:48.540Z] ===================================================================================================================
00:07:46.772 [2024-11-26T04:03:48.540Z] Total : 58910.42 230.12 0.00 0.00 12926.53 4940.41 25811.10
00:07:47.031
00:07:47.031 real 0m1.849s
00:07:47.031 user 0m1.549s
00:07:47.031 sys 0m0.185s
00:07:47.031 ************************************
00:07:47.031 END TEST bdev_write_zeroes
00:07:47.031 ************************************ 00:07:47.031 04:03:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.031 04:03:48 -- common/autotest_common.sh@10 -- # set +x 00:07:47.031 04:03:48 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:47.031 04:03:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:47.031 04:03:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.031 04:03:48 -- common/autotest_common.sh@10 -- # set +x 00:07:47.031 ************************************ 00:07:47.031 START TEST bdev_json_nonenclosed 00:07:47.031 ************************************ 00:07:47.031 04:03:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:47.031 [2024-11-26 04:03:48.714245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.031 [2024-11-26 04:03:48.714365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72834 ] 00:07:47.289 [2024-11-26 04:03:48.862437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.289 [2024-11-26 04:03:48.894452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.289 [2024-11-26 04:03:48.894616] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:47.289 [2024-11-26 04:03:48.894639] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.289 00:07:47.289 real 0m0.317s 00:07:47.289 user 0m0.125s 00:07:47.289 sys 0m0.089s 00:07:47.289 04:03:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.289 ************************************ 00:07:47.289 END TEST bdev_json_nonenclosed 00:07:47.289 ************************************ 00:07:47.289 04:03:48 -- common/autotest_common.sh@10 -- # set +x 00:07:47.289 04:03:49 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:47.289 04:03:49 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:47.289 04:03:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.289 04:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:47.289 ************************************ 00:07:47.289 START TEST bdev_json_nonarray 00:07:47.289 ************************************ 00:07:47.289 04:03:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:47.549 [2024-11-26 04:03:49.093910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
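bdev_json_nonenclosed above and bdev_json_nonarray starting here are negative tests: each hands bdevperf a deliberately malformed --json config and passes only if startup aborts with the logged errors ("not enclosed in {}." and "'subsystems' should be an array"). Two hypothetical minimal configs that would trip exactly those checks; the real nonenclosed.json and nonarray.json under test/bdev/ may differ in detail:

    # Top level is an array rather than the {} object the config loader requires:
    printf '%s\n' '[ { "subsystems": [] } ]' > nonenclosed.json

    # "subsystems" is present but its value is not an array:
    printf '%s\n' '{ "subsystems": { "not": "an array" } }' > nonarray.json

    # Either file should make bdevperf exit non-zero during config load, e.g.:
    # bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1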
00:07:47.549 [2024-11-26 04:03:49.094211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72865 ] 00:07:47.549 [2024-11-26 04:03:49.242702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.549 [2024-11-26 04:03:49.274152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.549 [2024-11-26 04:03:49.274462] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:07:47.549 [2024-11-26 04:03:49.274489] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.808 00:07:47.808 real 0m0.319s 00:07:47.808 user 0m0.112s 00:07:47.808 sys 0m0.103s 00:07:47.808 ************************************ 00:07:47.808 END TEST bdev_json_nonarray 00:07:47.808 ************************************ 00:07:47.808 04:03:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.808 04:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:47.808 04:03:49 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:07:47.808 04:03:49 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:07:47.808 04:03:49 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:07:47.808 04:03:49 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:07:47.808 04:03:49 -- bdev/blockdev.sh@809 -- # cleanup 00:07:47.808 04:03:49 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:47.808 04:03:49 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.808 04:03:49 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:07:47.808 04:03:49 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:07:47.808 04:03:49 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:07:47.808 04:03:49 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:07:47.808 ************************************ 00:07:47.808 END TEST blockdev_nvme 00:07:47.808 ************************************ 00:07:47.808 00:07:47.808 real 0m44.611s 00:07:47.808 user 1m16.373s 00:07:47.808 sys 0m4.981s 00:07:47.808 04:03:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.808 04:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:47.808 04:03:49 -- spdk/autotest.sh@206 -- # uname -s 00:07:47.808 04:03:49 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:07:47.808 04:03:49 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:47.808 04:03:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:47.808 04:03:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.808 04:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:47.808 ************************************ 00:07:47.808 START TEST blockdev_nvme_gpt 00:07:47.808 ************************************ 00:07:47.808 04:03:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:47.808 * Looking for test storage... 
00:07:47.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:47.808 04:03:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.808 04:03:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.808 04:03:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:48.068 04:03:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:48.068 04:03:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:48.068 04:03:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:48.068 04:03:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:48.068 04:03:49 -- scripts/common.sh@335 -- # IFS=.-: 00:07:48.068 04:03:49 -- scripts/common.sh@335 -- # read -ra ver1 00:07:48.068 04:03:49 -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.068 04:03:49 -- scripts/common.sh@336 -- # read -ra ver2 00:07:48.068 04:03:49 -- scripts/common.sh@337 -- # local 'op=<' 00:07:48.068 04:03:49 -- scripts/common.sh@339 -- # ver1_l=2 00:07:48.068 04:03:49 -- scripts/common.sh@340 -- # ver2_l=1 00:07:48.068 04:03:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:48.068 04:03:49 -- scripts/common.sh@343 -- # case "$op" in 00:07:48.068 04:03:49 -- scripts/common.sh@344 -- # : 1 00:07:48.068 04:03:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:48.068 04:03:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.068 04:03:49 -- scripts/common.sh@364 -- # decimal 1 00:07:48.068 04:03:49 -- scripts/common.sh@352 -- # local d=1 00:07:48.068 04:03:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.068 04:03:49 -- scripts/common.sh@354 -- # echo 1 00:07:48.068 04:03:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:48.068 04:03:49 -- scripts/common.sh@365 -- # decimal 2 00:07:48.068 04:03:49 -- scripts/common.sh@352 -- # local d=2 00:07:48.068 04:03:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.068 04:03:49 -- scripts/common.sh@354 -- # echo 2 00:07:48.069 04:03:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:48.069 04:03:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:48.069 04:03:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:48.069 04:03:49 -- scripts/common.sh@367 -- # return 0 00:07:48.069 04:03:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.069 04:03:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:48.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.069 --rc genhtml_branch_coverage=1 00:07:48.069 --rc genhtml_function_coverage=1 00:07:48.069 --rc genhtml_legend=1 00:07:48.069 --rc geninfo_all_blocks=1 00:07:48.069 --rc geninfo_unexecuted_blocks=1 00:07:48.069 00:07:48.069 ' 00:07:48.069 04:03:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:48.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.069 --rc genhtml_branch_coverage=1 00:07:48.069 --rc genhtml_function_coverage=1 00:07:48.069 --rc genhtml_legend=1 00:07:48.069 --rc geninfo_all_blocks=1 00:07:48.069 --rc geninfo_unexecuted_blocks=1 00:07:48.069 00:07:48.069 ' 00:07:48.069 04:03:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:48.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.069 --rc genhtml_branch_coverage=1 00:07:48.069 --rc genhtml_function_coverage=1 00:07:48.069 --rc genhtml_legend=1 00:07:48.069 --rc geninfo_all_blocks=1 00:07:48.069 --rc geninfo_unexecuted_blocks=1 00:07:48.069 00:07:48.069 ' 00:07:48.069 04:03:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:48.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.069 --rc genhtml_branch_coverage=1 00:07:48.069 --rc genhtml_function_coverage=1 00:07:48.069 --rc genhtml_legend=1 00:07:48.069 --rc geninfo_all_blocks=1 00:07:48.069 --rc geninfo_unexecuted_blocks=1 00:07:48.069 00:07:48.069 ' 00:07:48.069 04:03:49 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:48.069 04:03:49 -- bdev/nbd_common.sh@6 -- # set -e 00:07:48.069 04:03:49 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:48.069 04:03:49 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:48.069 04:03:49 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:48.069 04:03:49 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:48.069 04:03:49 -- bdev/blockdev.sh@18 -- # : 00:07:48.069 04:03:49 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:48.069 04:03:49 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:48.069 04:03:49 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:48.069 04:03:49 -- bdev/blockdev.sh@672 -- # uname -s 00:07:48.069 04:03:49 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:48.069 04:03:49 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:48.069 04:03:49 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:07:48.069 04:03:49 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:48.069 04:03:49 -- bdev/blockdev.sh@682 -- # dek= 00:07:48.069 04:03:49 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:48.069 04:03:49 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:48.069 04:03:49 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:48.069 04:03:49 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:07:48.069 04:03:49 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:07:48.069 04:03:49 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:48.069 04:03:49 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=72937 00:07:48.069 04:03:49 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:48.069 04:03:49 -- bdev/blockdev.sh@47 -- # waitforlisten 72937 00:07:48.069 04:03:49 -- common/autotest_common.sh@829 -- # '[' -z 72937 ']' 00:07:48.069 04:03:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.069 04:03:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.069 04:03:49 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:48.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.069 04:03:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.069 04:03:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.069 04:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:48.069 [2024-11-26 04:03:49.696158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:48.069 [2024-11-26 04:03:49.696443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72937 ] 00:07:48.330 [2024-11-26 04:03:49.843686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.330 [2024-11-26 04:03:49.875444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.330 [2024-11-26 04:03:49.875803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.901 04:03:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.901 04:03:50 -- common/autotest_common.sh@862 -- # return 0 00:07:48.901 04:03:50 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:48.901 04:03:50 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:07:48.901 04:03:50 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:49.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.420 Waiting for block devices as requested 00:07:49.420 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.420 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.420 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.679 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.955 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:07:54.955 04:03:56 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:07:54.955 04:03:56 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:07:54.955 04:03:56 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:07:54.955 04:03:56 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:07:54.955 04:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:54.955 04:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:54.955 04:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:54.955 04:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:54.955 04:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:07:54.955 04:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:07:54.955 04:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:54.955 04:03:56 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:54.955 04:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:07:54.955 04:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:07:54.955 04:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:54.955 04:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:54.955 04:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:07:54.955 04:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:54.955 04:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:54.955 04:03:56 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:07:54.955 04:03:56 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:07:54.955 04:03:56 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:07:54.955 04:03:56 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:54.955 04:03:56 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:07:54.955 04:03:56 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:07:54.955 04:03:56 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:07:54.955 04:03:56 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:07:54.955 BYT; 00:07:54.955 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:54.955 04:03:56 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:07:54.955 BYT; 00:07:54.955 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:54.955 04:03:56 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:07:54.955 04:03:56 -- bdev/blockdev.sh@114 -- # break 00:07:54.955 04:03:56 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:07:54.955 04:03:56 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:54.955 04:03:56 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:54.955 04:03:56 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:54.955 04:03:56 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:07:54.955 04:03:56 -- scripts/common.sh@410 -- # local spdk_guid 00:07:54.955 04:03:56 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:54.955 04:03:56 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:54.955 04:03:56 -- scripts/common.sh@415 -- # IFS='()' 00:07:54.955 04:03:56 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:07:54.955 04:03:56 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:54.955 04:03:56 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:54.955 04:03:56 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:54.955 04:03:56 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:54.955 04:03:56 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:54.955 04:03:56 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:07:54.955 04:03:56 -- scripts/common.sh@422 -- # local spdk_guid 00:07:54.955 04:03:56 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:54.955 04:03:56 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:54.955 04:03:56 -- scripts/common.sh@427 -- # IFS='()' 00:07:54.955 04:03:56 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:07:54.955 04:03:56 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:54.955 04:03:56 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:54.955 04:03:56 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:54.955 04:03:56 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:54.955 04:03:56 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:54.955 04:03:56 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:07:55.893 The operation has completed successfully. 00:07:55.893 04:03:57 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:07:56.829 The operation has completed successfully. 
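Before a GPT target is chosen, get_zoned_devs (the autotest_common.sh@1657-1668 loop traced above) filters out zoned namespaces, since the GPT flow assumes conventional block devices. The whole check is one sysfs read per device; a self-contained sketch of the same logic:

    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        # the attribute reads "none" for a conventional (non-zoned) device
        [[ $(</sys/block/$device/queue/zoned) != none ]]
    }

    for nvme in /sys/block/nvme*; do
        is_block_zoned "${nvme##*/}" && echo "skipping zoned device $nvme"
    done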
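The two sgdisk runs above stamp SPDK's partition type GUIDs and the test's unique partition GUIDs onto the fresh label. Rather than hard-coding the type GUIDs, scripts/common.sh scrapes them out of module/bdev/gpt/gpt.h at run time; the parse traced at common.sh@415-418 comes down to roughly this:

    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    # the header wraps the GUID fields in a macro call: NAME(0x6527994e, 0x2c5a, ...)
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//, /-}   # -> 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
    spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
    # -t 1:<guid> sets partition 1's type GUID, -u 1:<guid> its unique GUID
    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1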
00:07:56.829 04:03:58 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:57.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:57.765 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.765 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.765 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.765 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.765 04:03:59 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:07:57.765 04:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.765 04:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:57.765 [] 00:07:57.765 04:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.765 04:03:59 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:07:57.765 04:03:59 -- bdev/blockdev.sh@79 -- # local json 00:07:57.765 04:03:59 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:57.765 04:03:59 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:57.765 04:03:59 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:57.765 04:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.765 04:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:58.024 04:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.024 04:03:59 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:58.025 04:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.025 04:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:58.025 04:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.025 04:03:59 -- bdev/blockdev.sh@738 -- # cat 00:07:58.025 04:03:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:58.025 04:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.025 04:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:58.284 04:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.284 04:03:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:58.284 04:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.284 04:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:58.284 04:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.284 04:03:59 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:58.284 04:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.284 04:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:58.284 04:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.284 04:03:59 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:58.284 04:03:59 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:58.284 04:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.284 04:03:59 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:58.284 04:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:58.284 04:03:59 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.284 04:03:59 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:58.285 04:03:59 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a542d23d-39b9-4370-ac42-c88e8c941dfe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a542d23d-39b9-4370-ac42-c88e8c941dfe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' 
"df90f393-3c32-4f1d-b8be-ef679bbd4c1d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df90f393-3c32-4f1d-b8be-ef679bbd4c1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5b74247e-6abd-4219-9681-41b8bbd321fc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5b74247e-6abd-4219-9681-41b8bbd321fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "49a3e399-9e44-4f39-8ae0-1ce2906e2d49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "49a3e399-9e44-4f39-8ae0-1ce2906e2d49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f45a74b0-4450-4d2e-8f1b-ac56f5ccdb75"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f45a74b0-4450-4d2e-8f1b-ac56f5ccdb75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:58.285 04:03:59 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:58.285 04:03:59 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:58.285 04:03:59 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:07:58.285 04:03:59 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:58.285 04:03:59 -- bdev/blockdev.sh@752 -- # killprocess 72937 00:07:58.285 04:03:59 -- common/autotest_common.sh@936 -- # '[' -z 72937 ']' 00:07:58.285 04:03:59 -- common/autotest_common.sh@940 -- # kill -0 72937 00:07:58.285 04:03:59 -- common/autotest_common.sh@941 -- # uname 00:07:58.285 04:03:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:58.285 04:03:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72937 00:07:58.285 04:03:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:58.285 killing process with pid 72937 00:07:58.285 04:03:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:58.285 04:03:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72937' 00:07:58.285 04:03:59 -- common/autotest_common.sh@955 -- # kill 72937 00:07:58.285 04:03:59 -- common/autotest_common.sh@960 -- # wait 72937 00:07:58.543 04:04:00 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:58.543 04:04:00 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:07:58.543 04:04:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:58.543 04:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.543 04:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:58.543 ************************************ 00:07:58.543 START TEST bdev_hello_world 00:07:58.543 ************************************ 00:07:58.543 04:04:00 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:07:58.543 [2024-11-26 04:04:00.250767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:58.543 [2024-11-26 04:04:00.250880] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73572 ] 00:07:58.801 [2024-11-26 04:04:00.396407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.801 [2024-11-26 04:04:00.426525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.060 [2024-11-26 04:04:00.767019] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:59.060 [2024-11-26 04:04:00.767073] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:07:59.060 [2024-11-26 04:04:00.767091] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:59.060 [2024-11-26 04:04:00.769362] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:59.060 [2024-11-26 04:04:00.769908] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:59.060 [2024-11-26 04:04:00.769941] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:59.060 [2024-11-26 04:04:00.770089] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:59.060 00:07:59.060 [2024-11-26 04:04:00.770124] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:59.319 00:07:59.319 real 0m0.727s 00:07:59.319 user 0m0.467s 00:07:59.319 sys 0m0.157s 00:07:59.319 04:04:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.319 04:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:59.319 ************************************ 00:07:59.319 END TEST bdev_hello_world 00:07:59.319 ************************************ 00:07:59.319 04:04:00 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:59.319 04:04:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:59.319 04:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.319 04:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:59.319 ************************************ 00:07:59.319 START TEST bdev_bounds 00:07:59.319 ************************************ 00:07:59.319 04:04:00 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:59.319 04:04:00 -- bdev/blockdev.sh@288 -- # bdevio_pid=73602 00:07:59.319 04:04:00 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:59.319 Process bdevio pid: 73602 00:07:59.319 04:04:00 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 73602' 00:07:59.319 04:04:00 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:59.319 04:04:00 -- bdev/blockdev.sh@291 -- # waitforlisten 73602 00:07:59.319 04:04:00 -- common/autotest_common.sh@829 -- # '[' -z 73602 ']' 00:07:59.319 04:04:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.319 04:04:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
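bdev_bounds runs the same JSON config through the bdevio CUnit harness in two steps, both visible in the trace: bdevio starts with -w so it initializes its bdevs and then waits, and once waitforlisten sees the RPC socket, tests.py fires the actual run. Condensed from blockdev.sh@287-293 as traced:

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    trap 'cleanup; killprocess "$bdevio_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevio_pid"
    # tells the waiting bdevio process to execute every registered suite
    test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"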
00:07:59.319 04:04:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.319 04:04:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.319 04:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:59.319 [2024-11-26 04:04:01.023605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:59.319 [2024-11-26 04:04:01.023717] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73602 ] 00:07:59.577 [2024-11-26 04:04:01.171605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.577 [2024-11-26 04:04:01.203663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.577 [2024-11-26 04:04:01.203953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.577 [2024-11-26 04:04:01.203992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.145 04:04:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.145 04:04:01 -- common/autotest_common.sh@862 -- # return 0 00:08:00.145 04:04:01 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:00.407 I/O targets: 00:08:00.407 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:00.408 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:00.408 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:00.408 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.408 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.408 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.408 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:00.408 00:08:00.408 00:08:00.408 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.408 http://cunit.sourceforge.net/ 00:08:00.408 00:08:00.408 00:08:00.408 Suite: bdevio tests on: Nvme3n1 00:08:00.408 Test: blockdev write read block ...passed 00:08:00.408 Test: blockdev write zeroes read block ...passed 00:08:00.408 Test: blockdev write zeroes read no split ...passed 00:08:00.408 Test: blockdev write zeroes read split ...passed 00:08:00.408 Test: blockdev write zeroes read split partial ...passed 00:08:00.408 Test: blockdev reset ...[2024-11-26 04:04:01.948711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:00.408 passed 00:08:00.408 Test: blockdev write read 8 blocks ...[2024-11-26 04:04:01.950327] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.408 passed 00:08:00.408 Test: blockdev write read size > 128k ...passed 00:08:00.408 Test: blockdev write read invalid size ...passed 00:08:00.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.408 Test: blockdev write read max offset ...passed 00:08:00.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.408 Test: blockdev writev readv 8 blocks ...passed 00:08:00.408 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.408 Test: blockdev writev readv block ...passed 00:08:00.408 Test: blockdev writev readv size > 128k ...passed 00:08:00.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.408 Test: blockdev comparev and writev ...[2024-11-26 04:04:01.955372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3c04000 len:0x1000 00:08:00.408 [2024-11-26 04:04:01.955414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.408 passed 00:08:00.408 Test: blockdev nvme passthru rw ...passed 00:08:00.408 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.408 Test: blockdev nvme admin passthru ...[2024-11-26 04:04:01.955983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.408 [2024-11-26 04:04:01.956009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.408 passed 00:08:00.408 Test: blockdev copy ...passed 00:08:00.408 Suite: bdevio tests on: Nvme2n3 00:08:00.408 Test: blockdev write read block ...passed 00:08:00.408 Test: blockdev write zeroes read block ...passed 00:08:00.408 Test: blockdev write zeroes read no split ...passed 00:08:00.408 Test: blockdev write zeroes read split ...passed 00:08:00.408 Test: blockdev write zeroes read split partial ...passed 00:08:00.408 Test: blockdev reset ...[2024-11-26 04:04:01.969454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:00.408 passed 00:08:00.408 Test: blockdev write read 8 blocks ...[2024-11-26 04:04:01.971066] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
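The COMPARE FAILURE and INVALID OPCODE notices scattered through these suites are expected output, not regressions: the "comparev and writev" test deliberately issues an NVMe Compare against a range it has just rewritten, and the vendor-specific passthru test sends an opcode the QEMU controller rejects. SPDK prints the raw status as (SCT/SC), so (02/85) is Status Code Type 0x2, Media and Data Integrity Errors, with Status Code 0x85, Compare Failure, while (00/01) is a generic Invalid Opcode. A throwaway helper (hypothetical, only to make the split explicit):

    # split SPDK's "(SCT/SC)" pair into its two hex fields
    decode_status() { printf 'sct=0x%s sc=0x%s\n' "${1%/*}" "${1#*/}"; }
    decode_status 02/85   # sct=0x02 sc=0x85: media/data integrity, compare failure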
00:08:00.408 passed 00:08:00.408 Test: blockdev write read size > 128k ...passed 00:08:00.408 Test: blockdev write read invalid size ...passed 00:08:00.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.408 Test: blockdev write read max offset ...passed 00:08:00.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.408 Test: blockdev writev readv 8 blocks ...passed 00:08:00.408 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.408 Test: blockdev writev readv block ...passed 00:08:00.408 Test: blockdev writev readv size > 128k ...passed 00:08:00.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.408 Test: blockdev comparev and writev ...passed 00:08:00.408 Test: blockdev nvme passthru rw ...[2024-11-26 04:04:01.975440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3c04000 len:0x1000 00:08:00.408 [2024-11-26 04:04:01.975475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.408 passed 00:08:00.408 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.408 Test: blockdev nvme admin passthru ...[2024-11-26 04:04:01.975968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.408 [2024-11-26 04:04:01.975992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.408 passed 00:08:00.408 Test: blockdev copy ...passed 00:08:00.408 Suite: bdevio tests on: Nvme2n2 00:08:00.408 Test: blockdev write read block ...passed 00:08:00.408 Test: blockdev write zeroes read block ...passed 00:08:00.408 Test: blockdev write zeroes read no split ...passed 00:08:00.408 Test: blockdev write zeroes read split ...passed 00:08:00.408 Test: blockdev write zeroes read split partial ...passed 00:08:00.408 Test: blockdev reset ...[2024-11-26 04:04:01.990044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:00.408 passed 00:08:00.408 Test: blockdev write read 8 blocks ...[2024-11-26 04:04:01.991833] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.408 passed 00:08:00.408 Test: blockdev write read size > 128k ...passed 00:08:00.408 Test: blockdev write read invalid size ...passed 00:08:00.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.408 Test: blockdev write read max offset ...passed 00:08:00.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.408 Test: blockdev writev readv 8 blocks ...passed 00:08:00.408 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.408 Test: blockdev writev readv block ...passed 00:08:00.408 Test: blockdev writev readv size > 128k ...passed 00:08:00.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.408 Test: blockdev comparev and writev ...[2024-11-26 04:04:01.996101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7422000 len:0x1000 00:08:00.408 [2024-11-26 04:04:01.996134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.408 passed 00:08:00.408 Test: blockdev nvme passthru rw ...passed 00:08:00.408 Test: blockdev nvme passthru vendor specific ...[2024-11-26 04:04:01.996625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.408 passed 00:08:00.408 Test: blockdev nvme admin passthru ...[2024-11-26 04:04:01.996648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.408 passed 00:08:00.408 Test: blockdev copy ...passed 00:08:00.408 Suite: bdevio tests on: Nvme2n1 00:08:00.408 Test: blockdev write read block ...passed 00:08:00.408 Test: blockdev write zeroes read block ...passed 00:08:00.408 Test: blockdev write zeroes read no split ...passed 00:08:00.408 Test: blockdev write zeroes read split ...passed 00:08:00.408 Test: blockdev write zeroes read split partial ...passed 00:08:00.408 Test: blockdev reset ...[2024-11-26 04:04:02.012754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:00.408 passed 00:08:00.408 Test: blockdev write read 8 blocks ...[2024-11-26 04:04:02.014233] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.408 passed 00:08:00.408 Test: blockdev write read size > 128k ...passed 00:08:00.408 Test: blockdev write read invalid size ...passed 00:08:00.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.408 Test: blockdev write read max offset ...passed 00:08:00.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.408 Test: blockdev writev readv 8 blocks ...passed 00:08:00.408 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.408 Test: blockdev writev readv block ...passed 00:08:00.408 Test: blockdev writev readv size > 128k ...passed 00:08:00.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.408 Test: blockdev comparev and writev ...[2024-11-26 04:04:02.019265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3c0d000 len:0x1000 00:08:00.408 [2024-11-26 04:04:02.019298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.408 passed 00:08:00.408 Test: blockdev nvme passthru rw ...passed 00:08:00.408 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.408 Test: blockdev nvme admin passthru ...[2024-11-26 04:04:02.019835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.408 [2024-11-26 04:04:02.019858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.408 passed 00:08:00.408 Test: blockdev copy ...passed 00:08:00.408 Suite: bdevio tests on: Nvme1n1 00:08:00.408 Test: blockdev write read block ...passed 00:08:00.408 Test: blockdev write zeroes read block ...passed 00:08:00.408 Test: blockdev write zeroes read no split ...passed 00:08:00.408 Test: blockdev write zeroes read split ...passed 00:08:00.408 Test: blockdev write zeroes read split partial ...passed 00:08:00.409 Test: blockdev reset ...[2024-11-26 04:04:02.033906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:00.409 [2024-11-26 04:04:02.035326] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.409 passed 00:08:00.409 Test: blockdev write read 8 blocks ...passed 00:08:00.409 Test: blockdev write read size > 128k ...passed 00:08:00.409 Test: blockdev write read invalid size ...passed 00:08:00.409 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.409 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.409 Test: blockdev write read max offset ...passed 00:08:00.409 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.409 Test: blockdev writev readv 8 blocks ...passed 00:08:00.409 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.409 Test: blockdev writev readv block ...passed 00:08:00.409 Test: blockdev writev readv size > 128k ...passed 00:08:00.409 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.409 Test: blockdev comparev and writev ...[2024-11-26 04:04:02.040290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3832000 len:0x1000 00:08:00.409 [2024-11-26 04:04:02.040325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.409 passed 00:08:00.409 Test: blockdev nvme passthru rw ...passed 00:08:00.409 Test: blockdev nvme passthru vendor specific ...[2024-11-26 04:04:02.040940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.409 [2024-11-26 04:04:02.040965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.409 passed 00:08:00.409 Test: blockdev nvme admin passthru ...passed 00:08:00.409 Test: blockdev copy ...passed 00:08:00.409 Suite: bdevio tests on: Nvme0n1p2 00:08:00.409 Test: blockdev write read block ...passed 00:08:00.409 Test: blockdev write zeroes read block ...passed 00:08:00.409 Test: blockdev write zeroes read no split ...passed 00:08:00.409 Test: blockdev write zeroes read split ...passed 00:08:00.409 Test: blockdev write zeroes read split partial ...passed 00:08:00.409 Test: blockdev reset ...[2024-11-26 04:04:02.056993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:00.409 [2024-11-26 04:04:02.058562] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.409 passed 00:08:00.409 Test: blockdev write read 8 blocks ...passed 00:08:00.409 Test: blockdev write read size > 128k ...passed 00:08:00.409 Test: blockdev write read invalid size ...passed 00:08:00.409 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.409 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.409 Test: blockdev write read max offset ...passed 00:08:00.409 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.409 Test: blockdev writev readv 8 blocks ...passed 00:08:00.409 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.409 Test: blockdev writev readv block ...passed 00:08:00.409 Test: blockdev writev readv size > 128k ...passed 00:08:00.409 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.409 Test: blockdev comparev and writev ...passed 00:08:00.409 Test: blockdev nvme passthru rw ...passed 00:08:00.409 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.409 Test: blockdev nvme admin passthru ...passed 00:08:00.409 Test: blockdev copy ...[2024-11-26 04:04:02.063384] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:00.409 separate metadata which is not supported yet. 00:08:00.409 passed 00:08:00.409 Suite: bdevio tests on: Nvme0n1p1 00:08:00.409 Test: blockdev write read block ...passed 00:08:00.409 Test: blockdev write zeroes read block ...passed 00:08:00.409 Test: blockdev write zeroes read no split ...passed 00:08:00.409 Test: blockdev write zeroes read split ...passed 00:08:00.409 Test: blockdev write zeroes read split partial ...passed 00:08:00.409 Test: blockdev reset ...[2024-11-26 04:04:02.075600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:00.409 passed 00:08:00.409 Test: blockdev write read 8 blocks ...[2024-11-26 04:04:02.076911] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:00.409 passed 00:08:00.409 Test: blockdev write read size > 128k ...passed 00:08:00.409 Test: blockdev write read invalid size ...passed 00:08:00.409 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.409 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.409 Test: blockdev write read max offset ...passed 00:08:00.409 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.409 Test: blockdev writev readv 8 blocks ...passed 00:08:00.409 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.409 Test: blockdev writev readv block ...passed 00:08:00.409 Test: blockdev writev readv size > 128k ...passed 00:08:00.409 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.409 Test: blockdev comparev and writev ...[2024-11-26 04:04:02.081256] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:00.409 separate metadata which is not supported yet. 
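These skip notices line up with the bdev_get_bdevs dump earlier in the run: the two GPT bdevs report "md_size": 64 with "md_interleave": false, meaning separate metadata, which bdevio's comparev_and_writev path does not support yet, so only the plain tests run against Nvme0n1p1/p2. A quick way to spot such bdevs up front (hypothetical one-liner, assuming jq is installed):

    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.md_size != null and .md_size > 0) | .name'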
00:08:00.409 passed 00:08:00.409 Test: blockdev nvme passthru rw ...passed 00:08:00.409 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.409 Test: blockdev nvme admin passthru ...passed 00:08:00.409 Test: blockdev copy ...passed 00:08:00.409 00:08:00.409 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.409 suites 7 7 n/a 0 0 00:08:00.409 tests 161 161 161 0 0 00:08:00.409 asserts 1006 1006 1006 0 n/a 00:08:00.409 00:08:00.409 Elapsed time = 0.336 seconds 00:08:00.409 0 00:08:00.409 04:04:02 -- bdev/blockdev.sh@293 -- # killprocess 73602 00:08:00.409 04:04:02 -- common/autotest_common.sh@936 -- # '[' -z 73602 ']' 00:08:00.409 04:04:02 -- common/autotest_common.sh@940 -- # kill -0 73602 00:08:00.409 04:04:02 -- common/autotest_common.sh@941 -- # uname 00:08:00.409 04:04:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:00.409 04:04:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73602 00:08:00.409 killing process with pid 73602 00:08:00.409 04:04:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:00.409 04:04:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:00.409 04:04:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73602' 00:08:00.409 04:04:02 -- common/autotest_common.sh@955 -- # kill 73602 00:08:00.409 04:04:02 -- common/autotest_common.sh@960 -- # wait 73602 00:08:00.667 ************************************ 00:08:00.667 END TEST bdev_bounds 00:08:00.667 ************************************ 00:08:00.667 04:04:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:00.667 00:08:00.667 real 0m1.296s 00:08:00.667 user 0m3.285s 00:08:00.667 sys 0m0.254s 00:08:00.667 04:04:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.667 04:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:00.667 04:04:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:00.668 04:04:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:00.668 04:04:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.668 04:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:00.668 ************************************ 00:08:00.668 START TEST bdev_nbd 00:08:00.668 ************************************ 00:08:00.668 04:04:02 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:00.668 04:04:02 -- bdev/blockdev.sh@298 -- # uname -s 00:08:00.668 04:04:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:00.668 04:04:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.668 04:04:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:00.668 04:04:02 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.668 04:04:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:00.668 04:04:02 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:00.668 04:04:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:00.668 04:04:02 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
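nbd_function_test exports every bdev through the kernel's NBD driver so generic block-layer tools can hit it, running bdev_svc on its own RPC socket (/var/tmp/spdk-nbd.sock) to keep the NBD RPCs separate from the default one. Each export in the trace that follows is a single RPC; sketched:

    # nbd_start_disk takes a bdev name plus an optional /dev/nbdX; with no
    # device given it grabs a free one and prints the path it chose
    nbd_dev=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1)
    echo "Nvme0n1p1 exported at $nbd_dev"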
00:08:00.668 04:04:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:00.668 04:04:02 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:00.668 04:04:02 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:00.668 04:04:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:00.668 04:04:02 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.668 04:04:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:00.668 04:04:02 -- bdev/blockdev.sh@316 -- # nbd_pid=73646 00:08:00.668 04:04:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:00.668 04:04:02 -- bdev/blockdev.sh@318 -- # waitforlisten 73646 /var/tmp/spdk-nbd.sock 00:08:00.668 04:04:02 -- common/autotest_common.sh@829 -- # '[' -z 73646 ']' 00:08:00.668 04:04:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.668 04:04:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.668 04:04:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:00.668 04:04:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:00.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.668 04:04:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.668 04:04:02 -- common/autotest_common.sh@10 -- # set +x 00:08:00.668 [2024-11-26 04:04:02.376896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
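Every exported device is then verified before use. The waitfornbd helper (autotest_common.sh@866-887 in the trace below) waits for the name to show up in /proc/partitions, then proves the connection serves I/O with a single O_DIRECT read. A condensed sketch that folds the helper's two retry loops into one and writes to /tmp instead of the repo path:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct-I/O block read: fails unless the NBD device is really live
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }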
00:08:00.668 [2024-11-26 04:04:02.377005] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.926 [2024-11-26 04:04:02.523164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.926 [2024-11-26 04:04:02.552415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.492 04:04:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.492 04:04:03 -- common/autotest_common.sh@862 -- # return 0 00:08:01.492 04:04:03 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@24 -- # local i 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.492 04:04:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:01.751 04:04:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:01.751 04:04:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:01.751 04:04:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:01.751 04:04:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:01.751 04:04:03 -- common/autotest_common.sh@867 -- # local i 00:08:01.751 04:04:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:01.751 04:04:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:01.751 04:04:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:01.751 04:04:03 -- common/autotest_common.sh@871 -- # break 00:08:01.751 04:04:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:01.751 04:04:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:01.751 04:04:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.751 1+0 records in 00:08:01.751 1+0 records out 00:08:01.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567348 s, 7.2 MB/s 00:08:01.751 04:04:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.751 04:04:03 -- common/autotest_common.sh@884 -- # size=4096 00:08:01.751 04:04:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.751 04:04:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:01.751 04:04:03 -- common/autotest_common.sh@887 -- # return 0 00:08:01.751 04:04:03 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:08:01.751 04:04:03 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.751 04:04:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:02.009 04:04:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:02.009 04:04:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:02.009 04:04:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:02.009 04:04:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:02.009 04:04:03 -- common/autotest_common.sh@867 -- # local i 00:08:02.009 04:04:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.009 04:04:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.009 04:04:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:02.009 04:04:03 -- common/autotest_common.sh@871 -- # break 00:08:02.009 04:04:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.009 04:04:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.009 04:04:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.009 1+0 records in 00:08:02.009 1+0 records out 00:08:02.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419845 s, 9.8 MB/s 00:08:02.009 04:04:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.009 04:04:03 -- common/autotest_common.sh@884 -- # size=4096 00:08:02.009 04:04:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.009 04:04:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.009 04:04:03 -- common/autotest_common.sh@887 -- # return 0 00:08:02.009 04:04:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.009 04:04:03 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.009 04:04:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:02.268 04:04:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:02.268 04:04:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:02.268 04:04:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:02.268 04:04:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:02.268 04:04:03 -- common/autotest_common.sh@867 -- # local i 00:08:02.268 04:04:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.268 04:04:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.268 04:04:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:02.268 04:04:03 -- common/autotest_common.sh@871 -- # break 00:08:02.268 04:04:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.268 04:04:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.268 04:04:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.268 1+0 records in 00:08:02.268 1+0 records out 00:08:02.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047188 s, 8.7 MB/s 00:08:02.268 04:04:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.268 04:04:03 -- common/autotest_common.sh@884 -- # size=4096 00:08:02.268 04:04:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.268 04:04:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.268 04:04:03 -- 
common/autotest_common.sh@887 -- # return 0 00:08:02.268 04:04:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.268 04:04:03 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.268 04:04:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:02.527 04:04:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:02.527 04:04:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:02.527 04:04:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:02.527 04:04:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:02.527 04:04:04 -- common/autotest_common.sh@867 -- # local i 00:08:02.527 04:04:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.527 04:04:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.527 04:04:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:02.527 04:04:04 -- common/autotest_common.sh@871 -- # break 00:08:02.527 04:04:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.527 04:04:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.527 04:04:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.527 1+0 records in 00:08:02.527 1+0 records out 00:08:02.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439508 s, 9.3 MB/s 00:08:02.527 04:04:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.527 04:04:04 -- common/autotest_common.sh@884 -- # size=4096 00:08:02.527 04:04:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.527 04:04:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.527 04:04:04 -- common/autotest_common.sh@887 -- # return 0 00:08:02.527 04:04:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.527 04:04:04 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.527 04:04:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:02.785 04:04:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:02.785 04:04:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:02.785 04:04:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:02.785 04:04:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:02.785 04:04:04 -- common/autotest_common.sh@867 -- # local i 00:08:02.785 04:04:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.785 04:04:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.785 04:04:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:02.785 04:04:04 -- common/autotest_common.sh@871 -- # break 00:08:02.785 04:04:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.785 04:04:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.785 04:04:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.785 1+0 records in 00:08:02.785 1+0 records out 00:08:02.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519116 s, 7.9 MB/s 00:08:02.785 04:04:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.785 04:04:04 -- common/autotest_common.sh@884 -- # size=4096 00:08:02.785 04:04:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.785 04:04:04 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.785 04:04:04 -- common/autotest_common.sh@887 -- # return 0 00:08:02.785 04:04:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.785 04:04:04 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.785 04:04:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:03.044 04:04:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:03.044 04:04:04 -- common/autotest_common.sh@867 -- # local i 00:08:03.044 04:04:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:03.044 04:04:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:03.044 04:04:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:03.044 04:04:04 -- common/autotest_common.sh@871 -- # break 00:08:03.044 04:04:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:03.044 04:04:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:03.044 04:04:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.044 1+0 records in 00:08:03.044 1+0 records out 00:08:03.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00284199 s, 1.4 MB/s 00:08:03.044 04:04:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.044 04:04:04 -- common/autotest_common.sh@884 -- # size=4096 00:08:03.044 04:04:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.044 04:04:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:03.044 04:04:04 -- common/autotest_common.sh@887 -- # return 0 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:03.044 04:04:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:03.044 04:04:04 -- common/autotest_common.sh@867 -- # local i 00:08:03.044 04:04:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:03.044 04:04:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:03.044 04:04:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:03.044 04:04:04 -- common/autotest_common.sh@871 -- # break 00:08:03.044 04:04:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:03.044 04:04:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:03.044 04:04:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.044 1+0 records in 00:08:03.044 1+0 records out 00:08:03.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073568 s, 5.6 MB/s 00:08:03.044 04:04:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.044 04:04:04 -- common/autotest_common.sh@884 -- # size=4096 00:08:03.044 04:04:04 -- common/autotest_common.sh@885 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.044 04:04:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:03.044 04:04:04 -- common/autotest_common.sh@887 -- # return 0 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.044 04:04:04 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.302 04:04:04 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd0", 00:08:03.302 "bdev_name": "Nvme0n1p1" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd1", 00:08:03.302 "bdev_name": "Nvme0n1p2" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd2", 00:08:03.302 "bdev_name": "Nvme1n1" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd3", 00:08:03.302 "bdev_name": "Nvme2n1" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd4", 00:08:03.302 "bdev_name": "Nvme2n2" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd5", 00:08:03.302 "bdev_name": "Nvme2n3" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd6", 00:08:03.302 "bdev_name": "Nvme3n1" 00:08:03.302 } 00:08:03.302 ]' 00:08:03.302 04:04:04 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:03.302 04:04:04 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd0", 00:08:03.302 "bdev_name": "Nvme0n1p1" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd1", 00:08:03.302 "bdev_name": "Nvme0n1p2" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd2", 00:08:03.302 "bdev_name": "Nvme1n1" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd3", 00:08:03.302 "bdev_name": "Nvme2n1" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd4", 00:08:03.302 "bdev_name": "Nvme2n2" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd5", 00:08:03.302 "bdev_name": "Nvme2n3" 00:08:03.302 }, 00:08:03.302 { 00:08:03.302 "nbd_device": "/dev/nbd6", 00:08:03.302 "bdev_name": "Nvme3n1" 00:08:03.302 } 00:08:03.302 ]' 00:08:03.302 04:04:04 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:03.302 04:04:05 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:03.302 04:04:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.302 04:04:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:03.302 04:04:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.302 04:04:05 -- bdev/nbd_common.sh@51 -- # local i 00:08:03.302 04:04:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.303 04:04:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.560 04:04:05 -- 
bdev/nbd_common.sh@41 -- # break 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.560 04:04:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@41 -- # break 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.818 04:04:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@41 -- # break 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@41 -- # break 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.077 04:04:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@41 -- # break 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.335 04:04:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd5 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@41 -- # break 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.594 04:04:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@41 -- # break 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.851 04:04:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.109 04:04:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:05.109 04:04:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@65 -- # true 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@65 -- # count=0 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@122 -- # count=0 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@127 -- # return 0 00:08:05.110 04:04:06 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@12 -- # local i 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.110 04:04:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:05.369 /dev/nbd0 00:08:05.369 04:04:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:05.369 04:04:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:05.369 04:04:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:05.369 04:04:06 -- common/autotest_common.sh@867 -- # local i 00:08:05.369 04:04:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.369 04:04:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.369 04:04:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:05.369 04:04:06 -- common/autotest_common.sh@871 -- # break 00:08:05.369 04:04:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.369 04:04:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.369 04:04:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.369 1+0 records in 00:08:05.369 1+0 records out 00:08:05.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368574 s, 11.1 MB/s 00:08:05.369 04:04:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.369 04:04:06 -- common/autotest_common.sh@884 -- # size=4096 00:08:05.369 04:04:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.369 04:04:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.369 04:04:06 -- common/autotest_common.sh@887 -- # return 0 00:08:05.369 04:04:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.369 04:04:06 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.369 04:04:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:05.369 /dev/nbd1 00:08:05.369 04:04:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:05.369 04:04:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:05.369 04:04:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:05.369 04:04:07 -- common/autotest_common.sh@867 -- # local i 00:08:05.369 04:04:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.369 04:04:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.369 04:04:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:05.369 04:04:07 -- common/autotest_common.sh@871 -- # break 00:08:05.369 04:04:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.369 04:04:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.369 04:04:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.369 1+0 records in 00:08:05.369 1+0 records out 00:08:05.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647553 s, 6.3 MB/s 00:08:05.369 04:04:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.369 04:04:07 -- 
common/autotest_common.sh@884 -- # size=4096 00:08:05.369 04:04:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.628 04:04:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.628 04:04:07 -- common/autotest_common.sh@887 -- # return 0 00:08:05.628 04:04:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.628 04:04:07 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.628 04:04:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:05.628 /dev/nbd10 00:08:05.628 04:04:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:05.628 04:04:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:05.628 04:04:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:05.628 04:04:07 -- common/autotest_common.sh@867 -- # local i 00:08:05.628 04:04:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.628 04:04:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.628 04:04:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:05.628 04:04:07 -- common/autotest_common.sh@871 -- # break 00:08:05.628 04:04:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.628 04:04:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.628 04:04:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.628 1+0 records in 00:08:05.628 1+0 records out 00:08:05.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714078 s, 5.7 MB/s 00:08:05.628 04:04:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.628 04:04:07 -- common/autotest_common.sh@884 -- # size=4096 00:08:05.628 04:04:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.628 04:04:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.628 04:04:07 -- common/autotest_common.sh@887 -- # return 0 00:08:05.628 04:04:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.628 04:04:07 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.628 04:04:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:05.888 /dev/nbd11 00:08:05.888 04:04:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:05.888 04:04:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:05.888 04:04:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:05.888 04:04:07 -- common/autotest_common.sh@867 -- # local i 00:08:05.888 04:04:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.888 04:04:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.888 04:04:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:05.888 04:04:07 -- common/autotest_common.sh@871 -- # break 00:08:05.888 04:04:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.888 04:04:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.888 04:04:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.888 1+0 records in 00:08:05.888 1+0 records out 00:08:05.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101729 s, 4.0 MB/s 00:08:05.888 04:04:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
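Every nbd_start_disk in this stretch is followed by the same waitfornbd handshake (its closing size check and rm -f steps continue just below): poll /proc/partitions until the kernel has registered the device, then prove it is actually readable with a single O_DIRECT dd. A minimal sketch of that helper, reconstructed from the traced commands; the sleep interval is an assumption (xtrace shows only the loop bounds), and /tmp stands in for the suite's test/bdev/nbdtest path:

# Reconstruction of the waitfornbd pattern seen in the trace; not verbatim source.
waitfornbd() {
    local nbd_name=$1 i
    # Phase 1: wait (up to 20 tries) for the device node to appear.
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed interval; not visible in the xtrace output
    done
    # Phase 2: a 4 KiB O_DIRECT read must succeed and produce a non-empty file.
    for (( i = 1; i <= 20; i++ )); do
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
    done
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != "0" ]   # usable only if the read actually returned data
}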
00:08:05.888 04:04:07 -- common/autotest_common.sh@884 -- # size=4096 00:08:05.888 04:04:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.888 04:04:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.888 04:04:07 -- common/autotest_common.sh@887 -- # return 0 00:08:05.888 04:04:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.888 04:04:07 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.888 04:04:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:06.148 /dev/nbd12 00:08:06.148 04:04:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:06.148 04:04:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:06.148 04:04:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:06.148 04:04:07 -- common/autotest_common.sh@867 -- # local i 00:08:06.148 04:04:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:06.148 04:04:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:06.148 04:04:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:06.148 04:04:07 -- common/autotest_common.sh@871 -- # break 00:08:06.148 04:04:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:06.148 04:04:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:06.148 04:04:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.148 1+0 records in 00:08:06.148 1+0 records out 00:08:06.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939223 s, 4.4 MB/s 00:08:06.148 04:04:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.148 04:04:07 -- common/autotest_common.sh@884 -- # size=4096 00:08:06.148 04:04:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.148 04:04:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:06.148 04:04:07 -- common/autotest_common.sh@887 -- # return 0 00:08:06.148 04:04:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.148 04:04:07 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.148 04:04:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:06.409 /dev/nbd13 00:08:06.409 04:04:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:06.409 04:04:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:06.409 04:04:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:06.409 04:04:08 -- common/autotest_common.sh@867 -- # local i 00:08:06.409 04:04:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:06.409 04:04:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:06.409 04:04:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:06.409 04:04:08 -- common/autotest_common.sh@871 -- # break 00:08:06.409 04:04:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:06.409 04:04:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:06.409 04:04:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.409 1+0 records in 00:08:06.409 1+0 records out 00:08:06.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107626 s, 3.8 MB/s 00:08:06.409 04:04:08 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.409 04:04:08 -- common/autotest_common.sh@884 -- # size=4096 00:08:06.409 04:04:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.409 04:04:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:06.409 04:04:08 -- common/autotest_common.sh@887 -- # return 0 00:08:06.409 04:04:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.409 04:04:08 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.409 04:04:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:06.669 /dev/nbd14 00:08:06.669 04:04:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:06.669 04:04:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:06.669 04:04:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:06.669 04:04:08 -- common/autotest_common.sh@867 -- # local i 00:08:06.669 04:04:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:06.669 04:04:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:06.669 04:04:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:06.669 04:04:08 -- common/autotest_common.sh@871 -- # break 00:08:06.669 04:04:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:06.669 04:04:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:06.669 04:04:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.669 1+0 records in 00:08:06.669 1+0 records out 00:08:06.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035103 s, 11.7 MB/s 00:08:06.669 04:04:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.669 04:04:08 -- common/autotest_common.sh@884 -- # size=4096 00:08:06.669 04:04:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.669 04:04:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:06.669 04:04:08 -- common/autotest_common.sh@887 -- # return 0 00:08:06.669 04:04:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.669 04:04:08 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.669 04:04:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.669 04:04:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.669 04:04:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.930 04:04:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd0", 00:08:06.931 "bdev_name": "Nvme0n1p1" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd1", 00:08:06.931 "bdev_name": "Nvme0n1p2" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd10", 00:08:06.931 "bdev_name": "Nvme1n1" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd11", 00:08:06.931 "bdev_name": "Nvme2n1" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd12", 00:08:06.931 "bdev_name": "Nvme2n2" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd13", 00:08:06.931 "bdev_name": "Nvme2n3" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd14", 00:08:06.931 "bdev_name": "Nvme3n1" 00:08:06.931 } 00:08:06.931 ]' 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:06.931 { 00:08:06.931 "nbd_device": 
"/dev/nbd0", 00:08:06.931 "bdev_name": "Nvme0n1p1" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd1", 00:08:06.931 "bdev_name": "Nvme0n1p2" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd10", 00:08:06.931 "bdev_name": "Nvme1n1" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd11", 00:08:06.931 "bdev_name": "Nvme2n1" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd12", 00:08:06.931 "bdev_name": "Nvme2n2" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd13", 00:08:06.931 "bdev_name": "Nvme2n3" 00:08:06.931 }, 00:08:06.931 { 00:08:06.931 "nbd_device": "/dev/nbd14", 00:08:06.931 "bdev_name": "Nvme3n1" 00:08:06.931 } 00:08:06.931 ]' 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:06.931 /dev/nbd1 00:08:06.931 /dev/nbd10 00:08:06.931 /dev/nbd11 00:08:06.931 /dev/nbd12 00:08:06.931 /dev/nbd13 00:08:06.931 /dev/nbd14' 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:06.931 /dev/nbd1 00:08:06.931 /dev/nbd10 00:08:06.931 /dev/nbd11 00:08:06.931 /dev/nbd12 00:08:06.931 /dev/nbd13 00:08:06.931 /dev/nbd14' 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@65 -- # count=7 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@95 -- # count=7 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:06.931 256+0 records in 00:08:06.931 256+0 records out 00:08:06.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00694913 s, 151 MB/s 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:06.931 256+0 records in 00:08:06.931 256+0 records out 00:08:06.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174499 s, 6.0 MB/s 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.931 04:04:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:07.192 256+0 records in 00:08:07.192 256+0 records out 00:08:07.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223483 s, 4.7 MB/s 00:08:07.192 04:04:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.192 04:04:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:07.452 256+0 records in 00:08:07.452 256+0 records out 00:08:07.452 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.186206 s, 5.6 MB/s 00:08:07.452 04:04:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.452 04:04:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:07.712 256+0 records in 00:08:07.712 256+0 records out 00:08:07.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165606 s, 6.3 MB/s 00:08:07.712 04:04:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.712 04:04:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:07.712 256+0 records in 00:08:07.712 256+0 records out 00:08:07.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151833 s, 6.9 MB/s 00:08:07.712 04:04:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.712 04:04:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:07.973 256+0 records in 00:08:07.973 256+0 records out 00:08:07.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190042 s, 5.5 MB/s 00:08:07.973 04:04:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.973 04:04:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:08.235 256+0 records in 00:08:08.235 256+0 records out 00:08:08.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18612 s, 5.6 MB/s 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.235 04:04:09 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@51 -- # local i 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.235 04:04:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@41 -- # break 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.497 04:04:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@41 -- # break 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@41 -- # break 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.757 04:04:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:09.018 04:04:10 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@41 -- # break 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.018 04:04:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@41 -- # break 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.399 04:04:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@41 -- # break 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@41 -- # break 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.661 04:04:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@65 -- # true 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@65 -- # count=0 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@66 -- # echo 0 
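Teardown mirrors setup: after each nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the name disappears, so a later test cannot grab a device that is still being torn down (the device-count check via nbd_get_disks continues below). Roughly, per the traced loop bounds, with the polling interval again an assumption:

# Reconstruction of waitfornbd_exit; not verbatim source.
waitfornbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        # Done as soon as the kernel has dropped the device.
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1   # assumed interval
    done
    return 0
}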
00:08:09.921 04:04:11 -- bdev/nbd_common.sh@104 -- # count=0 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@109 -- # return 0 00:08:09.921 04:04:11 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:09.921 04:04:11 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:10.182 malloc_lvol_verify 00:08:10.182 04:04:11 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:10.486 fa8a41ed-9a55-4c25-8033-6d71d7e99427 00:08:10.486 04:04:11 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:10.486 5b6759d6-5c11-4656-8db1-22c47b96ace6 00:08:10.486 04:04:12 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:10.747 /dev/nbd0 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:10.747 mke2fs 1.47.0 (5-Feb-2023) 00:08:10.747 Discarding device blocks: 0/4096 done 00:08:10.747 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:10.747 00:08:10.747 Allocating group tables: 0/1 done 00:08:10.747 Writing inode tables: 0/1 done 00:08:10.747 Creating journal (1024 blocks): done 00:08:10.747 Writing superblocks and filesystem accounting information: 0/1 done 00:08:10.747 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@51 -- # local i 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.747 04:04:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@41 -- # break 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:11.007 04:04:12 -- bdev/nbd_common.sh@147 -- # return 0 00:08:11.007 04:04:12 -- bdev/blockdev.sh@324 -- # killprocess 73646 00:08:11.007 04:04:12 -- common/autotest_common.sh@936 -- # '[' -z 73646 ']' 
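The nbd_with_lvol_verify pass just traced stacks four RPCs under a mkfs.ext4 sanity check before tearing everything down (the killprocess teardown continues below). Condensed from the traced commands, with the rpc.py path and socket exactly as in the log; the sizes (a 16 MiB malloc bdev with 512-byte blocks and a 4 MiB lvol, hence mkfs's "4096 1k blocks") come straight from the traced arguments:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # RAM-backed bdev
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export it to the kernel
mkfs.ext4 /dev/nbd0                                    # end-to-end write test
$rpc nbd_stop_disk /dev/nbd0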
00:08:11.007 04:04:12 -- common/autotest_common.sh@940 -- # kill -0 73646 00:08:11.007 04:04:12 -- common/autotest_common.sh@941 -- # uname 00:08:11.007 04:04:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.007 04:04:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73646 00:08:11.007 killing process with pid 73646 00:08:11.007 04:04:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:11.007 04:04:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:11.007 04:04:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73646' 00:08:11.007 04:04:12 -- common/autotest_common.sh@955 -- # kill 73646 00:08:11.007 04:04:12 -- common/autotest_common.sh@960 -- # wait 73646 00:08:11.269 ************************************ 00:08:11.269 END TEST bdev_nbd 00:08:11.269 ************************************ 00:08:11.269 04:04:12 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:11.269 00:08:11.269 real 0m10.612s 00:08:11.269 user 0m14.866s 00:08:11.269 sys 0m3.634s 00:08:11.269 04:04:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.269 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.269 04:04:12 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:11.269 04:04:12 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:11.269 skipping fio tests on NVMe due to multi-ns failures. 00:08:11.269 04:04:12 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:11.269 04:04:12 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:11.269 04:04:12 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:11.269 04:04:12 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:11.269 04:04:12 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:11.269 04:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.269 04:04:12 -- common/autotest_common.sh@10 -- # set +x 00:08:11.269 ************************************ 00:08:11.269 START TEST bdev_verify 00:08:11.269 ************************************ 00:08:11.269 04:04:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:11.531 [2024-11-26 04:04:13.066422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.531 [2024-11-26 04:04:13.066579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74057 ] 00:08:11.531 [2024-11-26 04:04:13.221408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:11.531 [2024-11-26 04:04:13.278842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.531 [2024-11-26 04:04:13.278929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.103 Running I/O for 5 seconds... 
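With nbd done, bdev_verify drives the same seven bdevs through the bdevperf example app instead of the kernel. The invocation in the run_test line above is worth unpacking; the flag meanings below follow bdevperf's standard usage, and the reading of -C is inferred from the per-core job pairs visible in the results that follow:

# Same command as in the log; annotations are editorial.
#   -q 128     queue depth: 128 outstanding I/Os per job
#   -o 4096    I/O size in bytes (4 KiB)
#   -w verify  write a pattern, read it back, compare
#   -t 5       run time in seconds
#   -m 0x3     core mask: cores 0 and 1
#   -C         appears to place a job for every bdev on each core in the mask,
#              which is why each bdev reports a Core Mask 0x1 and a 0x2 job below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3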
00:08:17.393
00:08:17.393 Latency(us)
00:08:17.393 [2024-11-26T04:04:19.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x0 length 0x5e800
00:08:17.393 Nvme0n1p1 : 5.06 1918.95 7.50 0.00 0.00 66527.11 7914.73 87515.77
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x5e800 length 0x5e800
00:08:17.393 Nvme0n1p1 : 5.05 1907.63 7.45 0.00 0.00 66849.55 13107.20 90742.15
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x0 length 0x5e7ff
00:08:17.393 Nvme0n1p2 : 5.06 1920.18 7.50 0.00 0.00 66421.52 7259.37 81869.59
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:08:17.393 Nvme0n1p2 : 5.06 1912.78 7.47 0.00 0.00 66687.59 6755.25 89935.56
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x0 length 0xa0000
00:08:17.393 Nvme1n1 : 5.06 1918.50 7.49 0.00 0.00 66260.95 10939.47 63317.86
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0xa0000 length 0xa0000
00:08:17.393 Nvme1n1 : 5.06 1911.11 7.47 0.00 0.00 66584.75 10032.05 70577.23
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x0 length 0x80000
00:08:17.393 Nvme2n1 : 5.07 1916.69 7.49 0.00 0.00 66212.33 14821.22 63721.16
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x80000 length 0x80000
00:08:17.393 Nvme2n1 : 5.07 1909.31 7.46 0.00 0.00 66513.98 12804.73 67350.84
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x0 length 0x80000
00:08:17.393 Nvme2n2 : 5.07 1914.85 7.48 0.00 0.00 66173.23 17946.78 65334.35
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x80000 length 0x80000
00:08:17.393 Nvme2n2 : 5.08 1913.85 7.48 0.00 0.00 66357.39 2029.10 65737.65
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x0 length 0x80000
00:08:17.393 Nvme2n3 : 5.08 1921.20 7.50 0.00 0.00 65974.32 1651.00 64124.46
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x80000 length 0x80000
00:08:17.393 Nvme2n3 : 5.08 1913.36 7.47 0.00 0.00 66297.27 2508.01 62914.56
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x0 length 0x20000
00:08:17.393 Nvme3n1 : 5.08 1919.33 7.50 0.00 0.00 65966.26 5444.53 64527.75
00:08:17.393 [2024-11-26T04:04:19.161Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:17.393 Verification LBA range: start 0x20000 length 0x20000
00:08:17.393 Nvme3n1 : 5.08 1911.50 7.47 0.00 0.00 66211.36 6553.60 64931.05
[2024-11-26T04:04:19.161Z] ===================================================================================================================
00:08:17.393 [2024-11-26T04:04:19.161Z] Total : 26809.25 104.72 0.00 0.00 66359.10 1651.00 90742.15
00:08:19.937
00:08:19.937 real 0m8.134s
00:08:19.937 user 0m15.277s
00:08:19.937 sys 0m0.344s
00:08:19.937 04:04:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:19.937 ************************************
00:08:19.937 END TEST bdev_verify
00:08:19.937 ************************************
00:08:19.937 04:04:21 -- common/autotest_common.sh@10 -- # set +x
00:08:19.937 04:04:21 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:19.937 04:04:21 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:08:19.937 04:04:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:19.937 04:04:21 -- common/autotest_common.sh@10 -- # set +x
00:08:19.937 ************************************
00:08:19.937 START TEST bdev_verify_big_io
00:08:19.937 ************************************
00:08:19.937 04:04:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:19.937 [2024-11-26 04:04:21.290379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:19.937 [2024-11-26 04:04:21.290602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74155 ]
00:08:19.937 [2024-11-26 04:04:21.460030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:19.937 [2024-11-26 04:04:21.524462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:19.937 [2024-11-26 04:04:21.524583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.507 Running I/O for 5 seconds...
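While the big-I/O pass runs, a quick sanity check on the bdev_verify table above: the MiB/s column is simply IOPS times the 4 KiB I/O size. For the Total row, 26809.25 IOPS x 4096 B / 2^20 = 104.72 MiB/s, exactly the reported figure. The same identity in shell:

# Throughput(MiB/s) = IOPS * io_size / 2^20, numbers from the Total row above.
awk 'BEGIN { printf "%.2f MiB/s\n", 26809.25 * 4096 / 1048576 }'   # prints 104.72 MiB/s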
00:08:25.883
00:08:25.883 Latency(us)
00:08:25.883 [2024-11-26T04:04:27.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x0 length 0x5e80
00:08:25.883 Nvme0n1p1 : 5.43 216.32 13.52 0.00 0.00 581314.36 25609.45 745295.56
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x5e80 length 0x5e80
00:08:25.883 Nvme0n1p1 : 5.40 225.01 14.06 0.00 0.00 555282.18 87919.06 793691.37
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x0 length 0x5e7f
00:08:25.883 Nvme0n1p2 : 5.46 222.27 13.89 0.00 0.00 561763.03 27625.94 683994.19
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x5e7f length 0x5e7f
00:08:25.883 Nvme0n1p2 : 5.44 230.50 14.41 0.00 0.00 537074.94 36498.51 719484.46
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x0 length 0xa000
00:08:25.883 Nvme1n1 : 5.46 222.17 13.89 0.00 0.00 554657.47 29037.49 632371.99
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0xa000 length 0xa000
00:08:25.883 Nvme1n1 : 5.44 230.41 14.40 0.00 0.00 529290.93 37708.41 661409.48
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x0 length 0x8000
00:08:25.883 Nvme2n1 : 5.47 222.09 13.88 0.00 0.00 547500.09 29642.44 638824.76
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x8000 length 0x8000
00:08:25.883 Nvme2n1 : 5.44 230.33 14.40 0.00 0.00 521464.43 38515.00 600108.11
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x0 length 0x8000
00:08:25.883 Nvme2n2 : 5.47 222.02 13.88 0.00 0.00 540321.77 29037.49 767880.27
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x8000 length 0x8000
00:08:25.883 Nvme2n2 : 5.46 237.43 14.84 0.00 0.00 501121.19 20568.22 809823.31
00:08:25.883 [2024-11-26T04:04:27.651Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:25.883 Verification LBA range: start 0x0 length 0x8000
00:08:25.883 Nvme2n3 : 5.47 221.95 13.87 0.00 0.00 533097.28 29642.44 764653.88
00:08:25.883 [2024-11-26T04:04:27.652Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:25.884 Verification LBA range: start 0x8000 length 0x8000
00:08:25.884 Nvme2n3 : 5.47 246.49 15.41 0.00 0.00 477200.12 7612.26 742069.17
00:08:25.884 [2024-11-26T04:04:27.652Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:25.884 Verification LBA range: start 0x0 length 0x2000
00:08:25.884 Nvme3n1 : 5.48 236.85 14.80 0.00 0.00 495145.56 2306.36 738842.78
00:08:25.884 [2024-11-26T04:04:27.652Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:25.884 Verification LBA range: start 0x2000 length 0x2000
00:08:25.884 Nvme3n1 : 5.49 269.11 16.82 0.00 0.00 431313.05 2772.68 967916.31
00:08:25.884 [2024-11-26T04:04:27.652Z] ===================================================================================================================
00:08:25.884 [2024-11-26T04:04:27.652Z] Total : 3232.97 202.06 0.00 0.00 524021.78 2306.36 967916.31
00:08:27.271
00:08:27.271 real 0m7.588s
00:08:27.271 user 0m14.126s
00:08:27.271 sys 0m0.366s
00:08:27.271 04:04:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:27.271 ************************************
00:08:27.271 END TEST bdev_verify_big_io
00:08:27.271 ************************************
00:08:27.271 04:04:28 -- common/autotest_common.sh@10 -- # set +x
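The START/END banners and the real/user/sys triplets that bracket each sub-test come from the harness's run_test wrapper, which runs the named command under the shell's time builtin. A minimal sketch of that pattern (a reconstruction for illustration, not the actual common/autotest_common.sh source):

    # run_test NAME CMD...: print a banner, time the command, print a banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }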
00:08:28.732
00:08:28.732 Latency(us)
00:08:28.732 [2024-11-26T04:04:30.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:28.732 [2024-11-26T04:04:30.500Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:28.732 Nvme0n1p1 : 1.02 8081.26 31.57 0.00 0.00 15769.19 6251.13 30650.68
00:08:28.732 [2024-11-26T04:04:30.500Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:28.732 Nvme0n1p2 : 1.02 8065.82 31.51 0.00 0.00 15777.34 6099.89 31658.93
00:08:28.732 [2024-11-26T04:04:30.500Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:28.732 Nvme1n1 : 1.02 8159.75 31.87 0.00 0.00 15582.37 8771.74 23693.78
00:08:28.732 [2024-11-26T04:04:30.500Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:28.732 Nvme2n1 : 1.02 8150.17 31.84 0.00 0.00 15580.54 8822.15 23693.78
00:08:28.732 [2024-11-26T04:04:30.500Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:28.732 Nvme2n2 : 1.02 8140.66 31.80 0.00 0.00 15562.85 9427.10 23693.78
00:08:28.732 [2024-11-26T04:04:30.500Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:28.732 Nvme2n3 : 1.02 8122.22 31.73 0.00 0.00 15568.81 9275.86 23693.78
00:08:28.732 [2024-11-26T04:04:30.500Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:28.732 Nvme3n1 : 1.02 8059.41 31.48 0.00 0.00 15637.77 9376.69 23895.43
00:08:28.732 [2024-11-26T04:04:30.500Z] ===================================================================================================================
00:08:28.732 [2024-11-26T04:04:30.500Z] Total : 56779.29 221.79 0.00 0.00 15639.32 6099.89 31658.93
00:08:28.992
00:08:28.992 real 0m1.827s
00:08:28.992 user 0m1.541s
00:08:28.992 sys 0m0.174s
00:08:28.992 04:04:30 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:28.992 ************************************
00:08:28.992 END TEST bdev_write_zeroes
00:08:28.992 04:04:30 -- common/autotest_common.sh@10 -- # set +x
00:08:28.992 ************************************
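bdevperf takes its bdev layout from the file passed via --json; here that is the bdev.json the gpt suite generated earlier in the run. A minimal stand-in config of the expected shape (a top-level object with a "subsystems" array) looks like this; the malloc bdev is hypothetical, purely to make the sketch self-contained:

    # Hypothetical minimal config; the real bdev.json describes the NVMe/GPT bdevs under test.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 4096 -w write_zeroes -t 1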
00:08:28.992 04:04:30 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:28.992 04:04:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:28.992 04:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:28.992 04:04:30 -- common/autotest_common.sh@10 -- # set +x
00:08:28.992 ************************************
00:08:28.992 START TEST bdev_json_nonenclosed
00:08:28.992 ************************************
00:08:28.992 04:04:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:29.253 [2024-11-26 04:04:30.796518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:29.253 [2024-11-26 04:04:30.796618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74295 ]
00:08:29.253 [2024-11-26 04:04:30.941548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:29.253 [2024-11-26 04:04:30.973664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:29.253 [2024-11-26 04:04:30.973800] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:08:29.253 [2024-11-26 04:04:30.973821] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:29.515
00:08:29.515 real 0m0.316s
00:08:29.515 user 0m0.127s
00:08:29.515 sys 0m0.087s
00:08:29.515 04:04:31 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:29.515 ************************************
00:08:29.515 04:04:31 -- common/autotest_common.sh@10 -- # set +x
00:08:29.515 END TEST bdev_json_nonenclosed
00:08:29.515 ************************************
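bdev_json_nonenclosed and the bdev_json_nonarray test that follows are negative tests: each feeds a deliberately malformed config and passes if spdk_subsystem_init_from_json_config rejects it. Judging by the two error strings in the log, the inputs have roughly these shapes (a reconstruction, not the repo files verbatim):

    # nonenclosed.json: a valid JSON value that is not a top-level object
    #   -> "Invalid JSON configuration: not enclosed in {}."
    cat > nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF

    # nonarray.json: a top-level object whose "subsystems" is not an array
    #   -> "Invalid JSON configuration: 'subsystems' should be an array."
    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF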
00:08:29.515 04:04:31 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:29.515 04:04:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:29.515 04:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:29.515 04:04:31 -- common/autotest_common.sh@10 -- # set +x
00:08:29.515 ************************************
00:08:29.515 START TEST bdev_json_nonarray
00:08:29.515 ************************************
00:08:29.515 04:04:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:29.774 [2024-11-26 04:04:31.174397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:29.774 [2024-11-26 04:04:31.174535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74326 ]
00:08:29.774 [2024-11-26 04:04:31.322395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:29.774 [2024-11-26 04:04:31.354960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:29.774 [2024-11-26 04:04:31.355118] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:08:29.774 [2024-11-26 04:04:31.355142] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:29.774
00:08:29.774 real 0m0.320s
00:08:29.774 user 0m0.119s
00:08:29.774 sys 0m0.098s
00:08:29.774 04:04:31 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:29.774 04:04:31 -- common/autotest_common.sh@10 -- # set +x
00:08:29.774 ************************************
00:08:29.774 END TEST bdev_json_nonarray
00:08:29.774 ************************************
00:08:29.774 04:04:31 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]]
00:08:29.774 04:04:31 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]]
00:08:29.774 04:04:31 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:08:29.774 04:04:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:29.774 04:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:29.774 04:04:31 -- common/autotest_common.sh@10 -- # set +x
00:08:29.774 ************************************
00:08:29.774 START TEST bdev_gpt_uuid
00:08:29.774 ************************************
00:08:29.774 04:04:31 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid
00:08:29.774 04:04:31 -- bdev/blockdev.sh@612 -- # local bdev
00:08:29.774 04:04:31 -- bdev/blockdev.sh@614 -- # start_spdk_tgt
00:08:29.774 04:04:31 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=74346
00:08:29.774 04:04:31 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:29.774 04:04:31 -- bdev/blockdev.sh@47 -- # waitforlisten 74346
00:08:29.774 04:04:31 -- common/autotest_common.sh@829 -- # '[' -z 74346 ']'
00:08:29.774 04:04:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:29.774 04:04:31 -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:29.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:29.774 04:04:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:29.774 04:04:31 -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:29.774 04:04:31 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:08:29.774 04:04:31 -- common/autotest_common.sh@10 -- # set +x
00:08:30.033 [2024-11-26 04:04:31.540180] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:30.033 [2024-11-26 04:04:31.540294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74346 ]
00:08:30.033 [2024-11-26 04:04:31.689005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.033 [2024-11-26 04:04:31.720335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:30.033 [2024-11-26 04:04:31.720537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.600 04:04:32 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:30.600 04:04:32 -- common/autotest_common.sh@862 -- # return 0
00:08:30.600 04:04:32 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:30.600 04:04:32 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.600 04:04:32 -- common/autotest_common.sh@10 -- # set +x
00:08:31.166 Some configs were skipped because the RPC state that can call them passed over.
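bdev_gpt_uuid drives a standalone spdk_tgt over the default /var/tmp/spdk.sock socket; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. The same flow, written out by hand, reduces to roughly this (a sketch using only RPC names that appear in the trace, except rpc_get_methods, which is assumed here as a cheap liveness probe):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    # waitforlisten: poll until the RPC socket answers (the harness caps this at max_retries=100)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine   # let GPT examine finish before querying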
00:08:31.166 04:04:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.166 04:04:32 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine
00:08:31.166 04:04:32 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.166 04:04:32 -- common/autotest_common.sh@10 -- # set +x
00:08:31.166 04:04:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.166 04:04:32 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:08:31.166 04:04:32 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.166 04:04:32 -- common/autotest_common.sh@10 -- # set +x
00:08:31.166 04:04:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.166 04:04:32 -- bdev/blockdev.sh@619 -- # bdev='[
00:08:31.166 {
00:08:31.166 "name": "Nvme0n1p1",
00:08:31.166 "aliases": [
00:08:31.166 "6f89f330-603b-4116-ac73-2ca8eae53030"
00:08:31.166 ],
00:08:31.166 "product_name": "GPT Disk",
00:08:31.166 "block_size": 4096,
00:08:31.166 "num_blocks": 774144,
00:08:31.166 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:08:31.166 "md_size": 64,
00:08:31.166 "md_interleave": false,
00:08:31.166 "dif_type": 0,
00:08:31.166 "assigned_rate_limits": {
00:08:31.166 "rw_ios_per_sec": 0,
00:08:31.166 "rw_mbytes_per_sec": 0,
00:08:31.166 "r_mbytes_per_sec": 0,
00:08:31.166 "w_mbytes_per_sec": 0
00:08:31.166 },
00:08:31.166 "claimed": false,
00:08:31.166 "zoned": false,
00:08:31.166 "supported_io_types": {
00:08:31.166 "read": true,
00:08:31.166 "write": true,
00:08:31.166 "unmap": true,
00:08:31.166 "write_zeroes": true,
00:08:31.166 "flush": true,
00:08:31.166 "reset": true,
00:08:31.166 "compare": true,
00:08:31.166 "compare_and_write": false,
00:08:31.166 "abort": true,
00:08:31.166 "nvme_admin": false,
00:08:31.166 "nvme_io": false
00:08:31.166 },
00:08:31.166 "driver_specific": {
00:08:31.166 "gpt": {
00:08:31.166 "base_bdev": "Nvme0n1",
00:08:31.166 "offset_blocks": 256,
00:08:31.166 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:08:31.166 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:08:31.166 "partition_name": "SPDK_TEST_first"
00:08:31.166 }
00:08:31.166 }
00:08:31.166 }
00:08:31.166 ]'
00:08:31.166 04:04:32 -- bdev/blockdev.sh@620 -- # jq -r length
00:08:31.166 04:04:32 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]]
00:08:31.166 04:04:32 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]'
00:08:31.166 04:04:32 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:08:31.166 04:04:32 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:08:31.166 04:04:32 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:08:31.166 04:04:32 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:08:31.166 04:04:32 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.166 04:04:32 -- common/autotest_common.sh@10 -- # set +x
00:08:31.166 04:04:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.166 04:04:32 -- bdev/blockdev.sh@624 -- # bdev='[
00:08:31.166 {
00:08:31.166 "name": "Nvme0n1p2",
00:08:31.167 "aliases": [
00:08:31.167 "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:08:31.167 ],
00:08:31.167 "product_name": "GPT Disk",
00:08:31.167 "block_size": 4096,
00:08:31.167 "num_blocks": 774143,
00:08:31.167 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:08:31.167 "md_size": 64,
00:08:31.167 "md_interleave": false,
00:08:31.167 "dif_type": 0,
00:08:31.167 "assigned_rate_limits": {
00:08:31.167 "rw_ios_per_sec": 0,
00:08:31.167 "rw_mbytes_per_sec": 0,
00:08:31.167 "r_mbytes_per_sec": 0,
00:08:31.167 "w_mbytes_per_sec": 0
00:08:31.167 },
00:08:31.167 "claimed": false,
00:08:31.167 "zoned": false,
00:08:31.167 "supported_io_types": {
00:08:31.167 "read": true,
00:08:31.167 "write": true,
00:08:31.167 "unmap": true,
00:08:31.167 "write_zeroes": true,
00:08:31.167 "flush": true,
00:08:31.167 "reset": true,
00:08:31.167 "compare": true,
00:08:31.167 "compare_and_write": false,
00:08:31.167 "abort": true,
00:08:31.167 "nvme_admin": false,
00:08:31.167 "nvme_io": false
00:08:31.167 },
00:08:31.167 "driver_specific": {
00:08:31.167 "gpt": {
00:08:31.167 "base_bdev": "Nvme0n1",
00:08:31.167 "offset_blocks": 774400,
00:08:31.167 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:08:31.167 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:08:31.167 "partition_name": "SPDK_TEST_second"
00:08:31.167 }
00:08:31.167 }
00:08:31.167 }
00:08:31.167 ]'
00:08:31.167 04:04:32 -- bdev/blockdev.sh@625 -- # jq -r length
00:08:31.167 04:04:32 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]]
00:08:31.167 04:04:32 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]'
00:08:31.167 04:04:32 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:08:31.167 04:04:32 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:08:31.167 04:04:32 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:08:31.167 04:04:32 -- bdev/blockdev.sh@629 -- # killprocess 74346
00:08:31.167 04:04:32 -- common/autotest_common.sh@936 -- # '[' -z 74346 ']'
00:08:31.167 04:04:32 -- common/autotest_common.sh@940 -- # kill -0 74346
00:08:31.167 04:04:32 -- common/autotest_common.sh@941 -- # uname
00:08:31.167 04:04:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:31.167 04:04:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74346
00:08:31.167 04:04:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:31.167 04:04:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:31.167 killing process with pid 74346
00:08:31.167 04:04:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74346'
00:08:31.167 04:04:32 -- common/autotest_common.sh@955 -- # kill 74346
00:08:31.167 04:04:32 -- common/autotest_common.sh@960 -- # wait 74346
00:08:31.425
00:08:31.425 real 0m1.709s
00:08:31.425 user 0m1.860s
00:08:31.425 sys 0m0.341s
00:08:31.425 04:04:33 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:31.425 04:04:33 -- common/autotest_common.sh@10 -- # set +x
00:08:31.425 ************************************
00:08:31.425 END TEST bdev_gpt_uuid
00:08:31.425 ************************************
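The assertions above pull single fields out of the bdev_get_bdevs output with jq and compare them against the partition GUIDs stamped into the GPT earlier in the suite. Written out as a standalone script, the same check is (sketch; UUID taken from the log):

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$uuid")
    # exactly one bdev matched, and both its alias and its GPT unique GUID equal the UUID
    [[ $(jq -r 'length' <<<"$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]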
00:08:31.682 04:04:33 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]]
00:08:31.682 04:04:33 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:08:31.682 04:04:33 -- bdev/blockdev.sh@809 -- # cleanup
00:08:31.682 04:04:33 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:08:31.682 04:04:33 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:31.682 04:04:33 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]]
00:08:31.682 04:04:33 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]]
00:08:31.682 04:04:33 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]]
00:08:31.682 04:04:33 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:31.940 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:31.940 Waiting for block devices as requested
00:08:32.199 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:08:32.199 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:08:32.199 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:08:32.199 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:08:37.471 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:08:37.471 04:04:38 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]]
00:08:37.471 04:04:38 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1
00:08:37.732 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:08:37.732 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54
00:08:37.732 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:08:37.732 /dev/nvme2n1: calling ioctl to re-read partition table: Success
00:08:37.732 04:04:39 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]]
************************************
END TEST blockdev_nvme_gpt
00:08:37.732 real 0m49.790s
00:08:37.732 user 1m3.432s
00:08:37.732 sys 0m8.001s
04:04:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
04:04:39 -- common/autotest_common.sh@10 -- # set +x
00:08:37.732 ************************************
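The byte strings wipefs reports are the GPT signatures it cleared: 45 46 49 20 50 41 52 54 is ASCII for "EFI PART", the GPT header magic, found here at offset 0x1000 (LBA 1 on this 4096-byte-sector namespace) and at the backup header near the end of the disk, while 55 aa at offset 0x1fe is the protective-MBR boot signature. A quick way to confirm the primary header is gone after such a wipe (sketch):

    # Before the wipe this window contains the ASCII string "EFI PART".
    dd if=/dev/nvme2n1 bs=1 skip=$((0x1000)) count=8 2>/dev/null | hexdump -C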
00:08:37.732 04:04:39 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
04:04:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
04:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
04:04:39 -- common/autotest_common.sh@10 -- # set +x
00:08:37.732 ************************************
00:08:37.732 START TEST nvme
00:08:37.732 ************************************
04:04:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:08:37.732 * Looking for test storage...
00:08:37.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:37.732 04:04:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:37.732 04:04:39 -- common/autotest_common.sh@1690 -- # lcov --version
00:08:37.732 04:04:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:37.732 04:04:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:37.732 04:04:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:37.732 04:04:39 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:37.732 04:04:39 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:37.732 04:04:39 -- scripts/common.sh@335 -- # IFS=.-:
00:08:37.732 04:04:39 -- scripts/common.sh@335 -- # read -ra ver1
00:08:37.732 04:04:39 -- scripts/common.sh@336 -- # IFS=.-:
00:08:37.732 04:04:39 -- scripts/common.sh@336 -- # read -ra ver2
00:08:37.732 04:04:39 -- scripts/common.sh@337 -- # local 'op=<'
00:08:37.732 04:04:39 -- scripts/common.sh@339 -- # ver1_l=2
00:08:37.732 04:04:39 -- scripts/common.sh@340 -- # ver2_l=1
00:08:37.732 04:04:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:37.732 04:04:39 -- scripts/common.sh@343 -- # case "$op" in
00:08:37.732 04:04:39 -- scripts/common.sh@344 -- # : 1
00:08:37.732 04:04:39 -- scripts/common.sh@363 -- # (( v = 0 ))
00:08:37.732 04:04:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:37.732 04:04:39 -- scripts/common.sh@364 -- # decimal 1
00:08:37.732 04:04:39 -- scripts/common.sh@352 -- # local d=1
00:08:37.732 04:04:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:37.732 04:04:39 -- scripts/common.sh@354 -- # echo 1
00:08:37.732 04:04:39 -- scripts/common.sh@364 -- # ver1[v]=1
00:08:37.732 04:04:39 -- scripts/common.sh@365 -- # decimal 2
00:08:37.732 04:04:39 -- scripts/common.sh@352 -- # local d=2
00:08:37.732 04:04:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:37.732 04:04:39 -- scripts/common.sh@354 -- # echo 2
00:08:37.732 04:04:39 -- scripts/common.sh@365 -- # ver2[v]=2
00:08:37.732 04:04:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:37.732 04:04:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:37.732 04:04:39 -- scripts/common.sh@367 -- # return 0
00:08:37.732 04:04:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:37.732 04:04:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:37.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.732 --rc genhtml_branch_coverage=1
00:08:37.732 --rc genhtml_function_coverage=1
00:08:37.732 --rc genhtml_legend=1
00:08:37.732 --rc geninfo_all_blocks=1
00:08:37.732 --rc geninfo_unexecuted_blocks=1
00:08:37.732
00:08:37.732 '
00:08:37.732 04:04:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:37.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.732 --rc genhtml_branch_coverage=1
00:08:37.732 --rc genhtml_function_coverage=1
00:08:37.732 --rc genhtml_legend=1
00:08:37.732 --rc geninfo_all_blocks=1
00:08:37.732 --rc geninfo_unexecuted_blocks=1
00:08:37.732
00:08:37.732 '
00:08:37.732 04:04:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:08:37.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.732 --rc genhtml_branch_coverage=1
00:08:37.732 --rc genhtml_function_coverage=1
00:08:37.732 --rc genhtml_legend=1
00:08:37.732 --rc geninfo_all_blocks=1
00:08:37.732 --rc geninfo_unexecuted_blocks=1
00:08:37.732
00:08:37.732 '
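The scripts/common.sh trace above is a field-wise version comparison: both version strings are split on '.', '-' and ':' via IFS, the fields are compared numerically left to right, and the first inequality decides. The same idea, re-implemented compactly (a sketch, not the repo function):

    # Compare dotted versions field by field; prints "lt", "gt" or "eq".
    ver_cmp() {
        local IFS=.-: i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && { echo lt; return; }
            (( ${a[i]:-0} > ${b[i]:-0} )) && { echo gt; return; }
        done
        echo eq
    }
    ver_cmp 1.15 2   # -> lt, matching "lt 1.15 2" succeeding in the trace above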
00:08:37.732 04:04:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:08:37.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.732 --rc genhtml_branch_coverage=1
00:08:37.732 --rc genhtml_function_coverage=1
00:08:37.732 --rc genhtml_legend=1
00:08:37.732 --rc geninfo_all_blocks=1
00:08:37.732 --rc geninfo_unexecuted_blocks=1
00:08:37.732
00:08:37.732 '
00:08:37.732 04:04:39 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:38.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:39.054 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:08:39.054 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:08:39.054 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:08:39.054 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:08:39.054 04:04:40 -- nvme/nvme.sh@79 -- # uname
00:08:39.054 04:04:40 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:08:39.054 04:04:40 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:08:39.054 Waiting for stub to ready for secondary processes...
04:04:40 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
04:04:40 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE'
04:04:40 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2
04:04:40 -- common/autotest_common.sh@1055 -- # echo 0
04:04:40 -- common/autotest_common.sh@1057 -- # stubpid=74996
04:04:40 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes...
04:04:40 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
04:04:40 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
04:04:40 -- common/autotest_common.sh@1061 -- # [[ -e /proc/74996 ]]
04:04:40 -- common/autotest_common.sh@1062 -- # sleep 1s
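start_stub backgrounds the stub app (which, as the DPDK primary process, holds the hugepages and NVMe devices for the secondary test binaries) and the harness then polls: while /var/run/spdk_stub0 does not yet exist, it checks that /proc/$stubpid is still alive and sleeps 1s, which is exactly the repeating '[ -e /var/run/spdk_stub0 ]' / 'sleep 1s' records here and below. The loop reduces to this sketch:

    # Wait until the stub's primary-process marker appears (or the stub dies).
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || { echo "stub exited prematurely" >&2; exit 1; }
        sleep 1s
    done
    echo done.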
[2024-11-26 04:04:40.679285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:39.055 [2024-11-26 04:04:40.679595] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:39.999 [2024-11-26 04:04:41.623558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:39.999 [2024-11-26 04:04:41.643963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:39.999 [2024-11-26 04:04:41.644468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:39.999 [2024-11-26 04:04:41.644491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:39.999 04:04:41 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:08:39.999 04:04:41 -- common/autotest_common.sh@1061 -- # [[ -e /proc/74996 ]]
00:08:39.999 04:04:41 -- common/autotest_common.sh@1062 -- # sleep 1s
[2024-11-26 04:04:41.656313] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
[2024-11-26 04:04:41.666844] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
[2024-11-26 04:04:41.667020] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
[2024-11-26 04:04:41.669328] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
[2024-11-26 04:04:41.669610] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created
[2024-11-26 04:04:41.669764] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created
[2024-11-26 04:04:41.672270] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
[2024-11-26 04:04:41.672590] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created
[2024-11-26 04:04:41.672851] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created
[2024-11-26 04:04:41.675897] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
[2024-11-26 04:04:41.676267] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created
[2024-11-26 04:04:41.676397] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created
[2024-11-26 04:04:41.676873] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created
[2024-11-26 04:04:41.677222] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
00:08:40.941 done.
00:08:40.941 04:04:42 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:08:40.941 04:04:42 -- common/autotest_common.sh@1064 -- # echo done.
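With CUSE enabled, the stub exposes each controller and namespace as kernel-visible character devices named after the sessions in the notices above; a spot-check from another shell would look like this (sketch; per SPDK's nvme_cuse documentation these land under /dev/spdk/):

    # One node per controller (nvme0..nvme3) and per namespace (e.g. nvme3n3).
    ls -l /dev/spdk/nvme0 /dev/spdk/nvme0n1 /dev/spdk/nvme3n3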
00:08:40.941 04:04:42 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:08:40.941 04:04:42 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:08:40.941 04:04:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:40.941 04:04:42 -- common/autotest_common.sh@10 -- # set +x
00:08:40.941 ************************************
00:08:40.941 START TEST nvme_reset
00:08:40.941 ************************************
00:08:40.941 04:04:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:08:41.202 Initializing NVMe Controllers
00:08:41.202 Skipping QEMU NVMe SSD at 0000:00:06.0
00:08:41.202 Skipping QEMU NVMe SSD at 0000:00:07.0
00:08:41.202 Skipping QEMU NVMe SSD at 0000:00:09.0
00:08:41.202 Skipping QEMU NVMe SSD at 0000:00:08.0
00:08:41.202 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:08:41.202
00:08:41.202 real 0m0.194s
00:08:41.202 user 0m0.053s
00:08:41.202 sys 0m0.092s
00:08:41.202 04:04:42 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:41.202 ************************************
00:08:41.202 END TEST nvme_reset
00:08:41.202 04:04:42 -- common/autotest_common.sh@10 -- # set +x
00:08:41.202 ************************************
00:08:41.202 04:04:42 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:08:41.202 04:04:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:41.202 04:04:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:41.202 04:04:42 -- common/autotest_common.sh@10 -- # set +x
00:08:41.202 ************************************
00:08:41.202 START TEST nvme_identify
00:08:41.202 ************************************
00:08:41.202 04:04:42 -- common/autotest_common.sh@1114 -- # nvme_identify
00:08:41.202 04:04:42 -- nvme/nvme.sh@12 -- # bdfs=()
00:08:41.202 04:04:42 -- nvme/nvme.sh@12 -- # local bdfs bdf
00:08:41.202 04:04:42 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:08:41.202 04:04:42 -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:08:41.202 04:04:42 -- common/autotest_common.sh@1508 -- # bdfs=()
00:08:41.202 04:04:42 -- common/autotest_common.sh@1508 -- # local bdfs
00:08:41.202 04:04:42 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:41.202 04:04:42 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:08:41.202 04:04:42 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:41.466 04:04:42 -- common/autotest_common.sh@1510 -- # (( 4 == 0 ))
00:08:41.466 04:04:42 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:08:41.466 04:04:42 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:08:41.466 =====================================================
00:08:41.467 NVMe Controller at 0000:00:06.0 [1b36:0010]
00:08:41.467 =====================================================
00:08:41.467 Controller Capabilities/Features
00:08:41.467 ================================
00:08:41.467 Vendor ID: 1b36
00:08:41.467 Subsystem Vendor ID: 1af4
00:08:41.467 Serial Number: 12340
00:08:41.467 Model Number: QEMU NVMe Ctrl
00:08:41.467 Firmware Version: 8.0.0
00:08:41.467 Recommended Arb Burst: 6
00:08:41.467 IEEE OUI Identifier: 00 54 52
00:08:41.467 Multi-path I/O
00:08:41.467 May have multiple subsystem ports: No
00:08:41.467 May have
multiple controllers: No 00:08:41.467 Associated with SR-IOV VF: No 00:08:41.467 Max Data Transfer Size: 524288 00:08:41.467 Max Number of Namespaces: 256 00:08:41.467 Max Number of I/O Queues: 64 00:08:41.467 NVMe Specification Version (VS): 1.4 00:08:41.467 NVMe Specification Version (Identify): 1.4 00:08:41.467 Maximum Queue Entries: 2048 00:08:41.467 Contiguous Queues Required: Yes 00:08:41.467 Arbitration Mechanisms Supported 00:08:41.467 Weighted Round Robin: Not Supported 00:08:41.467 Vendor Specific: Not Supported 00:08:41.467 Reset Timeout: 7500 ms 00:08:41.467 Doorbell Stride: 4 bytes 00:08:41.467 NVM Subsystem Reset: Not Supported 00:08:41.467 Command Sets Supported 00:08:41.467 NVM Command Set: Supported 00:08:41.467 Boot Partition: Not Supported 00:08:41.467 Memory Page Size Minimum: 4096 bytes 00:08:41.467 Memory Page Size Maximum: 65536 bytes 00:08:41.467 Persistent Memory Region: Not Supported 00:08:41.467 Optional Asynchronous Events Supported 00:08:41.467 Namespace Attribute Notices: Supported 00:08:41.467 Firmware Activation Notices: Not Supported 00:08:41.467 ANA Change Notices: Not Supported 00:08:41.467 PLE Aggregate Log Change Notices: Not Supported 00:08:41.467 LBA Status Info Alert Notices: Not Supported 00:08:41.467 EGE Aggregate Log Change Notices: Not Supported 00:08:41.467 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.467 Zone Descriptor Change Notices: Not Supported 00:08:41.467 Discovery Log Change Notices: Not Supported 00:08:41.467 Controller Attributes 00:08:41.467 128-bit Host Identifier: Not Supported 00:08:41.467 Non-Operational Permissive Mode: Not Supported 00:08:41.467 NVM Sets: Not Supported 00:08:41.467 Read Recovery Levels: Not Supported 00:08:41.467 Endurance Groups: Not Supported 00:08:41.467 Predictable Latency Mode: Not Supported 00:08:41.467 Traffic Based Keep ALive: Not Supported 00:08:41.467 Namespace Granularity: Not Supported 00:08:41.467 SQ Associations: Not Supported 00:08:41.467 UUID List: Not Supported 00:08:41.467 Multi-Domain Subsystem: Not Supported 00:08:41.467 Fixed Capacity Management: Not Supported 00:08:41.467 Variable Capacity Management: Not Supported 00:08:41.467 Delete Endurance Group: Not Supported 00:08:41.467 Delete NVM Set: Not Supported 00:08:41.467 Extended LBA Formats Supported: Supported 00:08:41.467 Flexible Data Placement Supported: Not Supported 00:08:41.467 00:08:41.467 Controller Memory Buffer Support 00:08:41.467 ================================ 00:08:41.467 Supported: No 00:08:41.467 00:08:41.467 Persistent Memory Region Support 00:08:41.467 ================================ 00:08:41.467 Supported: No 00:08:41.467 00:08:41.467 Admin Command Set Attributes 00:08:41.467 ============================ 00:08:41.467 Security Send/Receive: Not Supported 00:08:41.467 Format NVM: Supported 00:08:41.467 Firmware Activate/Download: Not Supported 00:08:41.467 Namespace Management: Supported 00:08:41.467 Device Self-Test: Not Supported 00:08:41.467 Directives: Supported 00:08:41.467 NVMe-MI: Not Supported 00:08:41.467 Virtualization Management: Not Supported 00:08:41.467 Doorbell Buffer Config: Supported 00:08:41.467 Get LBA Status Capability: Not Supported 00:08:41.467 Command & Feature Lockdown Capability: Not Supported 00:08:41.467 Abort Command Limit: 4 00:08:41.467 Async Event Request Limit: 4 00:08:41.467 Number of Firmware Slots: N/A 00:08:41.467 Firmware Slot 1 Read-Only: N/A 00:08:41.467 Firmware Activation Without Reset: N/A 00:08:41.467 Multiple Update Detection Support: N/A 00:08:41.467 Firmware 
Update Granularity: No Information Provided 00:08:41.467 Per-Namespace SMART Log: Yes 00:08:41.467 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.467 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:41.467 Command Effects Log Page: Supported 00:08:41.467 Get Log Page Extended Data: Supported 00:08:41.467 Telemetry Log Pages: Not Supported 00:08:41.467 Persistent Event Log Pages: Not Supported 00:08:41.467 Supported Log Pages Log Page: May Support 00:08:41.467 Commands Supported & Effects Log Page: Not Supported 00:08:41.467 Feature Identifiers & Effects Log Page:May Support 00:08:41.467 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.467 Data Area 4 for Telemetry Log: Not Supported 00:08:41.467 Error Log Page Entries Supported: 1 00:08:41.467 Keep Alive: Not Supported 00:08:41.467 00:08:41.467 NVM Command Set Attributes 00:08:41.467 ========================== 00:08:41.467 Submission Queue Entry Size 00:08:41.467 Max: 64 00:08:41.467 Min: 64 00:08:41.467 Completion Queue Entry Size 00:08:41.467 Max: 16 00:08:41.467 Min: 16 00:08:41.467 Number of Namespaces: 256 00:08:41.467 Compare Command: Supported 00:08:41.467 Write Uncorrectable Command: Not Supported 00:08:41.467 Dataset Management Command: Supported 00:08:41.467 Write Zeroes Command: Supported 00:08:41.467 Set Features Save Field: Supported 00:08:41.467 Reservations: Not Supported 00:08:41.467 Timestamp: Supported 00:08:41.467 Copy: Supported 00:08:41.467 Volatile Write Cache: Present 00:08:41.467 Atomic Write Unit (Normal): 1 00:08:41.467 Atomic Write Unit (PFail): 1 00:08:41.467 Atomic Compare & Write Unit: 1 00:08:41.467 Fused Compare & Write: Not Supported 00:08:41.467 Scatter-Gather List 00:08:41.467 SGL Command Set: Supported 00:08:41.467 SGL Keyed: Not Supported 00:08:41.467 SGL Bit Bucket Descriptor: Not Supported 00:08:41.467 SGL Metadata Pointer: Not Supported 00:08:41.467 Oversized SGL: Not Supported 00:08:41.467 SGL Metadata Address: Not Supported 00:08:41.467 SGL Offset: Not Supported 00:08:41.467 Transport SGL Data Block: Not Supported 00:08:41.467 Replay Protected Memory Block: Not Supported 00:08:41.467 00:08:41.467 Firmware Slot Information 00:08:41.467 ========================= 00:08:41.467 Active slot: 1 00:08:41.467 Slot 1 Firmware Revision: 1.0 00:08:41.467 00:08:41.467 00:08:41.467 Commands Supported and Effects 00:08:41.467 ============================== 00:08:41.467 Admin Commands 00:08:41.467 -------------- 00:08:41.467 Delete I/O Submission Queue (00h): Supported 00:08:41.467 Create I/O Submission Queue (01h): Supported 00:08:41.467 Get Log Page (02h): Supported 00:08:41.467 Delete I/O Completion Queue (04h): Supported 00:08:41.467 Create I/O Completion Queue (05h): Supported 00:08:41.467 Identify (06h): Supported 00:08:41.467 Abort (08h): Supported 00:08:41.467 Set Features (09h): Supported 00:08:41.467 Get Features (0Ah): Supported 00:08:41.467 Asynchronous Event Request (0Ch): Supported 00:08:41.467 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.467 Directive Send (19h): Supported 00:08:41.467 Directive Receive (1Ah): Supported 00:08:41.467 Virtualization Management (1Ch): Supported 00:08:41.467 Doorbell Buffer Config (7Ch): Supported 00:08:41.467 Format NVM (80h): Supported LBA-Change 00:08:41.467 I/O Commands 00:08:41.467 ------------ 00:08:41.467 Flush (00h): Supported LBA-Change 00:08:41.467 Write (01h): Supported LBA-Change 00:08:41.467 Read (02h): Supported 00:08:41.467 Compare (05h): Supported 00:08:41.467 Write Zeroes (08h): Supported LBA-Change 
00:08:41.467 Dataset Management (09h): Supported LBA-Change 00:08:41.467 Unknown (0Ch): Supported 00:08:41.467 Unknown (12h): Supported 00:08:41.467 Copy (19h): Supported LBA-Change 00:08:41.467 Unknown (1Dh): Supported LBA-Change 00:08:41.467 00:08:41.467 Error Log 00:08:41.467 ========= 00:08:41.467 00:08:41.467 Arbitration 00:08:41.467 =========== 00:08:41.467 Arbitration Burst: no limit 00:08:41.467 00:08:41.467 Power Management 00:08:41.467 ================ 00:08:41.467 Number of Power States: 1 00:08:41.467 Current Power State: Power State #0 00:08:41.467 Power State #0: 00:08:41.467 Max Power: 25.00 W 00:08:41.467 Non-Operational State: Operational 00:08:41.467 Entry Latency: 16 microseconds 00:08:41.467 Exit Latency: 4 microseconds 00:08:41.467 Relative Read Throughput: 0 00:08:41.467 Relative Read Latency: 0 00:08:41.468 Relative Write Throughput: 0 00:08:41.468 Relative Write Latency: 0 00:08:41.468 Idle Power: Not Reported 00:08:41.468 Active Power: Not Reported 00:08:41.468 Non-Operational Permissive Mode: Not Supported 00:08:41.468 00:08:41.468 Health Information 00:08:41.468 ================== 00:08:41.468 Critical Warnings: 00:08:41.468 Available Spare Space: OK 00:08:41.468 Temperature: OK 00:08:41.468 Device Reliability: OK 00:08:41.468 Read Only: No 00:08:41.468 Volatile Memory Backup: OK 00:08:41.468 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.468 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.468 Available Spare: 0% 00:08:41.468 Available Spare Threshold: 0% 00:08:41.468 Life Percentage Used: 0% 00:08:41.468 Data Units Read: 1627 00:08:41.468 Data Units Written: 746 00:08:41.468 Host Read Commands: 74008 00:08:41.468 Host Write Commands: 36645 00:08:41.468 Controller Busy Time: 0 minutes 00:08:41.468 Power Cycles: 0 00:08:41.468 Power On Hours: 0 hours 00:08:41.468 Unsafe Shutdowns: 0 00:08:41.468 Unrecoverable Media Errors: 0 00:08:41.468 Lifetime Error Log Entries: 0 00:08:41.468 Warning Temperature Time: 0 minutes 00:08:41.468 Critical Temperature Time: 0 minutes 00:08:41.468 00:08:41.468 Number of Queues 00:08:41.468 ================ 00:08:41.468 Number of I/O Submission Queues: 64 00:08:41.468 Number of I/O Completion Queues: 64 00:08:41.468 00:08:41.468 ZNS Specific Controller Data 00:08:41.468 ============================ 00:08:41.468 Zone Append Size Limit: 0 00:08:41.468 00:08:41.468 00:08:41.468 Active Namespaces 00:08:41.468 ================= 00:08:41.468 Namespace ID:1 00:08:41.468 Error Recovery Timeout: Unlimited 00:08:41.468 Command Set Identifier: NVM (00h) 00:08:41.468 Deallocate: Supported 00:08:41.468 Deallocated/Unwritten Error: Supported 00:08:41.468 Deallocated Read Value: All 0x00 00:08:41.468 Deallocate in Write Zeroes: Not Supported 00:08:41.468 Deallocated Guard Field: 0xFFFF 00:08:41.468 Flush: Supported 00:08:41.468 Reservation: Not Supported 00:08:41.468 Metadata Transferred as: Separate Metadata Buffer 00:08:41.468 Namespace Sharing Capabilities: Private 00:08:41.468 Size (in LBAs): 1548666 (5GiB) 00:08:41.468 Capacity (in LBAs): 1548666 (5GiB) 00:08:41.468 Utilization (in LBAs): 1548666 (5GiB) 00:08:41.468 Thin Provisioning: Not Supported 00:08:41.468 Per-NS Atomic Units: No 00:08:41.468 Maximum Single Source Range Length: 128 00:08:41.468 Maximum Copy Length: 128 00:08:41.468 Maximum Source Range Count: 128 00:08:41.468 NGUID/EUI64 Never Reused: No 00:08:41.468 Namespace Write Protected: No 00:08:41.468 Number of LBA Formats: 8 00:08:41.468 Current LBA Format: LBA Format #07 00:08:41.468 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:08:41.468 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.468 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.468 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.468 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.468 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.468 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.468 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.468 00:08:41.468 ===================================================== 00:08:41.468 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:08:41.468 ===================================================== 00:08:41.468 Controller Capabilities/Features 00:08:41.468 ================================ 00:08:41.468 Vendor ID: 1b36 00:08:41.468 Subsystem Vendor ID: 1af4 00:08:41.468 Serial Number: 12341 00:08:41.468 Model Number: QEMU NVMe Ctrl 00:08:41.468 Firmware Version: 8.0.0 00:08:41.468 Recommended Arb Burst: 6 00:08:41.468 IEEE OUI Identifier: 00 54 52 00:08:41.468 Multi-path I/O 00:08:41.468 May have multiple subsystem ports: No 00:08:41.468 May have multiple controllers: No 00:08:41.468 Associated with SR-IOV VF: No 00:08:41.468 Max Data Transfer Size: 524288 00:08:41.468 Max Number of Namespaces: 256 00:08:41.468 Max Number of I/O Queues: 64 00:08:41.468 NVMe Specification Version (VS): 1.4 00:08:41.468 NVMe Specification Version (Identify): 1.4 00:08:41.468 Maximum Queue Entries: 2048 00:08:41.468 Contiguous Queues Required: Yes 00:08:41.468 Arbitration Mechanisms Supported 00:08:41.468 Weighted Round Robin: Not Supported 00:08:41.468 Vendor Specific: Not Supported 00:08:41.468 Reset Timeout: 7500 ms 00:08:41.468 Doorbell Stride: 4 bytes 00:08:41.468 NVM Subsystem Reset: Not Supported 00:08:41.468 Command Sets Supported 00:08:41.468 NVM Command Set: Supported 00:08:41.468 Boot Partition: Not Supported 00:08:41.468 Memory Page Size Minimum: 4096 bytes 00:08:41.468 Memory Page Size Maximum: 65536 bytes 00:08:41.468 Persistent Memory Region: Not Supported 00:08:41.468 Optional Asynchronous Events Supported 00:08:41.468 Namespace Attribute Notices: Supported 00:08:41.468 Firmware Activation Notices: Not Supported 00:08:41.468 ANA Change Notices: Not Supported 00:08:41.468 PLE Aggregate Log Change Notices: Not Supported 00:08:41.468 LBA Status Info Alert Notices: Not Supported 00:08:41.468 EGE Aggregate Log Change Notices: Not Supported 00:08:41.468 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.468 Zone Descriptor Change Notices: Not Supported 00:08:41.468 Discovery Log Change Notices: Not Supported 00:08:41.468 Controller Attributes 00:08:41.468 128-bit Host Identifier: Not Supported 00:08:41.468 Non-Operational Permissive Mode: Not Supported 00:08:41.468 NVM Sets: Not Supported 00:08:41.468 Read Recovery Levels: Not Supported 00:08:41.468 Endurance Groups: Not Supported 00:08:41.468 Predictable Latency Mode: Not Supported 00:08:41.468 Traffic Based Keep ALive: Not Supported 00:08:41.468 Namespace Granularity: Not Supported 00:08:41.468 SQ Associations: Not Supported 00:08:41.468 UUID List: Not Supported 00:08:41.468 Multi-Domain Subsystem: Not Supported 00:08:41.468 Fixed Capacity Management: Not Supported 00:08:41.468 Variable Capacity Management: Not Supported 00:08:41.468 Delete Endurance Group: Not Supported 00:08:41.468 Delete NVM Set: Not Supported 00:08:41.468 Extended LBA Formats Supported: Supported 00:08:41.468 Flexible Data Placement Supported: Not Supported 00:08:41.468 00:08:41.468 Controller Memory Buffer Support 
00:08:41.468 ================================ 00:08:41.468 Supported: No 00:08:41.468 00:08:41.468 Persistent Memory Region Support 00:08:41.468 ================================ 00:08:41.468 Supported: No 00:08:41.468 00:08:41.468 Admin Command Set Attributes 00:08:41.468 ============================ 00:08:41.468 Security Send/Receive: Not Supported 00:08:41.468 Format NVM: Supported 00:08:41.468 Firmware Activate/Download: Not Supported 00:08:41.468 Namespace Management: Supported 00:08:41.468 Device Self-Test: Not Supported 00:08:41.468 Directives: Supported 00:08:41.468 NVMe-MI: Not Supported 00:08:41.468 Virtualization Management: Not Supported 00:08:41.468 Doorbell Buffer Config: Supported 00:08:41.468 Get LBA Status Capability: Not Supported 00:08:41.468 Command & Feature Lockdown Capability: Not Supported 00:08:41.468 Abort Command Limit: 4 00:08:41.468 Async Event Request Limit: 4 00:08:41.468 Number of Firmware Slots: N/A 00:08:41.468 Firmware Slot 1 Read-Only: N/A 00:08:41.468 Firmware Activation Without Reset: N/A 00:08:41.468 Multiple Update Detection Support: N/A 00:08:41.468 Firmware Update Granularity: No Information Provided 00:08:41.468 Per-Namespace SMART Log: Yes 00:08:41.468 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.468 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:41.468 Command Effects Log Page: Supported 00:08:41.468 Get Log Page Extended Data: Supported 00:08:41.468 Telemetry Log Pages: Not Supported 00:08:41.468 Persistent Event Log Pages: Not Supported 00:08:41.468 Supported Log Pages Log Page: May Support 00:08:41.468 Commands Supported & Effects Log Page: Not Supported 00:08:41.468 Feature Identifiers & Effects Log Page:May Support 00:08:41.468 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.468 Data Area 4 for Telemetry Log: Not Supported 00:08:41.468 Error Log Page Entries Supported: 1 00:08:41.468 Keep Alive: Not Supported 00:08:41.468 00:08:41.468 NVM Command Set Attributes 00:08:41.468 ========================== 00:08:41.468 Submission Queue Entry Size 00:08:41.468 Max: 64 00:08:41.468 Min: 64 00:08:41.468 Completion Queue Entry Size 00:08:41.468 Max: 16 00:08:41.468 Min: 16 00:08:41.468 Number of Namespaces: 256 00:08:41.468 Compare Command: Supported 00:08:41.468 Write Uncorrectable Command: Not Supported 00:08:41.468 Dataset Management Command: Supported 00:08:41.468 Write Zeroes Command: Supported 00:08:41.468 Set Features Save Field: Supported 00:08:41.468 Reservations: Not Supported 00:08:41.468 Timestamp: Supported 00:08:41.469 Copy: Supported 00:08:41.469 Volatile Write Cache: Present 00:08:41.469 Atomic Write Unit (Normal): 1 00:08:41.469 Atomic Write Unit (PFail): 1 00:08:41.469 Atomic Compare & Write Unit: 1 00:08:41.469 Fused Compare & Write: Not Supported 00:08:41.469 Scatter-Gather List 00:08:41.469 SGL Command Set: Supported 00:08:41.469 SGL Keyed: Not Supported 00:08:41.469 SGL Bit Bucket Descriptor: Not Supported 00:08:41.469 SGL Metadata Pointer: Not Supported 00:08:41.469 Oversized SGL: Not Supported 00:08:41.469 SGL Metadata Address: Not Supported 00:08:41.469 SGL Offset: Not Supported 00:08:41.469 Transport SGL Data Block: Not Supported 00:08:41.469 Replay Protected Memory Block: Not Supported 00:08:41.469 00:08:41.469 Firmware Slot Information 00:08:41.469 ========================= 00:08:41.469 Active slot: 1 00:08:41.469 Slot 1 Firmware Revision: 1.0 00:08:41.469 00:08:41.469 00:08:41.469 Commands Supported and Effects 00:08:41.469 ============================== 00:08:41.469 Admin Commands 00:08:41.469 
-------------- 00:08:41.469 Delete I/O Submission Queue (00h): Supported 00:08:41.469 Create I/O Submission Queue (01h): Supported 00:08:41.469 Get Log Page (02h): Supported 00:08:41.469 Delete I/O Completion Queue (04h): Supported 00:08:41.469 Create I/O Completion Queue (05h): Supported 00:08:41.469 Identify (06h): Supported 00:08:41.469 Abort (08h): Supported 00:08:41.469 Set Features (09h): Supported 00:08:41.469 Get Features (0Ah): Supported 00:08:41.469 Asynchronous Event Request (0Ch): Supported 00:08:41.469 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.469 Directive Send (19h): Supported 00:08:41.469 Directive Receive (1Ah): Supported 00:08:41.469 Virtualization Management (1Ch): Supported 00:08:41.469 Doorbell Buffer Config (7Ch): Supported 00:08:41.469 Format NVM (80h): Supported LBA-Change 00:08:41.469 I/O Commands 00:08:41.469 ------------ 00:08:41.469 Flush (00h): Supported LBA-Change 00:08:41.469 Write (01h): Supported LBA-Change 00:08:41.469 Read (02h): Supported 00:08:41.469 Compare (05h): Supported 00:08:41.469 Write Zeroes (08h): Supported LBA-Change 00:08:41.469 Dataset Management (09h): Supported LBA-Change 00:08:41.469 Unknown (0Ch): Supported 00:08:41.469 Unknown (12h): Supported 00:08:41.469 Copy (19h): Supported LBA-Change 00:08:41.469 Unknown (1Dh): Supported LBA-Change 00:08:41.469 00:08:41.469 Error Log 00:08:41.469 ========= 00:08:41.469 00:08:41.469 Arbitration 00:08:41.469 =========== 00:08:41.469 Arbitration Burst: no limit 00:08:41.469 00:08:41.469 Power Management 00:08:41.469 ================ 00:08:41.469 Number of Power States: 1 00:08:41.469 Current Power State: Power State #0 00:08:41.469 Power State #0: 00:08:41.469 Max Power: 25.00 W 00:08:41.469 Non-Operational State: Operational 00:08:41.469 Entry Latency: 16 microseconds 00:08:41.469 Exit Latency: 4 microseconds 00:08:41.469 Relative Read Throughput: 0 00:08:41.469 Relative Read Latency: 0 00:08:41.469 Relative Write Throughput: 0 00:08:41.469 Relative Write Latency: 0 00:08:41.469 Idle Power: Not Reported 00:08:41.469 Active Power: Not Reported 00:08:41.469 Non-Operational Permissive Mode: Not Supported 00:08:41.469 00:08:41.469 Health Information 00:08:41.469 ================== 00:08:41.469 Critical Warnings: 00:08:41.469 Available Spare Space: OK 00:08:41.469 Temperature: OK 00:08:41.469 Device Reliability: OK 00:08:41.469 Read Only: No 00:08:41.469 Volatile Memory Backup: OK 00:08:41.469 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.469 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.469 Available Spare: 0% 00:08:41.469 Available Spare Threshold: 0% 00:08:41.469 Life Percentage Used: 0% 00:08:41.469 Data Units Read: 1114 00:08:41.469 Data Units Written: 511 00:08:41.469 Host Read Commands: 51782 00:08:41.469 Host Write Commands: 25329 00:08:41.469 Controller Busy Time: 0 minutes 00:08:41.469 Power Cycles: 0 00:08:41.469 Power On Hours: 0 hours 00:08:41.469 Unsafe Shutdowns: 0 00:08:41.469 Unrecoverable Media Errors: 0 00:08:41.469 Lifetime Error Log Entries: 0 00:08:41.469 Warning Temperature Time: 0 minutes 00:08:41.469 Critical Temperature Time: 0 minutes 00:08:41.469 00:08:41.469 Number of Queues 00:08:41.469 ================ 00:08:41.469 Number of I/O Submission Queues: 64 00:08:41.469 Number of I/O Completion Queues: 64 00:08:41.469 00:08:41.469 ZNS Specific Controller Data 00:08:41.469 ============================ 00:08:41.469 Zone Append Size Limit: 0 00:08:41.469 00:08:41.469 00:08:41.469 Active Namespaces 00:08:41.469 ================= 00:08:41.469 
Namespace ID:1 00:08:41.469 Error Recovery Timeout: Unlimited 00:08:41.469 Command Set Identifier: NVM (00h) 00:08:41.469 Deallocate: Supported 00:08:41.469 Deallocated/Unwritten Error: Supported 00:08:41.469 Deallocated Read Value: All 0x00 00:08:41.469 Deallocate in Write Zeroes: Not Supported 00:08:41.469 Deallocated Guard Field: 0xFFFF 00:08:41.469 Flush: Supported 00:08:41.469 Reservation: Not Supported 00:08:41.469 Namespace Sharing Capabilities: Private 00:08:41.469 Size (in LBAs): 1310720 (5GiB) 00:08:41.469 Capacity (in LBAs): 1310720 (5GiB) 00:08:41.469 Utilization (in LBAs): 1310720 (5GiB) 00:08:41.469 Thin Provisioning: Not Supported 00:08:41.469 Per-NS Atomic Units: No 00:08:41.469 Maximum Single Source Range Length: 128 00:08:41.469 Maximum Copy Length: 128 00:08:41.469 Maximum Source Range Count: 128 00:08:41.469 NGUID/EUI64 Never Reused: No 00:08:41.469 Namespace Write Protected: No 00:08:41.469 Number of LBA Formats: 8 00:08:41.469 Current LBA Format: LBA Format #04 00:08:41.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.469 00:08:41.469 ===================================================== 00:08:41.469 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:08:41.469 ===================================================== 00:08:41.469 Controller Capabilities/Features 00:08:41.469 ================================ 00:08:41.469 Vendor ID: 1b36 00:08:41.469 Subsystem Vendor ID: 1af4 00:08:41.469 Serial Number: 12343 00:08:41.469 Model Number: QEMU NVMe Ctrl 00:08:41.469 Firmware Version: 8.0.0 00:08:41.469 Recommended Arb Burst: 6 00:08:41.469 IEEE OUI Identifier: 00 54 52 00:08:41.469 Multi-path I/O 00:08:41.469 May have multiple subsystem ports: No 00:08:41.469 May have multiple controllers: Yes 00:08:41.469 Associated with SR-IOV VF: No 00:08:41.469 Max Data Transfer Size: 524288 00:08:41.469 Max Number of Namespaces: 256 00:08:41.469 Max Number of I/O Queues: 64 00:08:41.469 NVMe Specification Version (VS): 1.4 00:08:41.469 NVMe Specification Version (Identify): 1.4 00:08:41.469 Maximum Queue Entries: 2048 00:08:41.469 Contiguous Queues Required: Yes 00:08:41.469 Arbitration Mechanisms Supported 00:08:41.469 Weighted Round Robin: Not Supported 00:08:41.469 Vendor Specific: Not Supported 00:08:41.469 Reset Timeout: 7500 ms 00:08:41.469 Doorbell Stride: 4 bytes 00:08:41.469 NVM Subsystem Reset: Not Supported 00:08:41.469 Command Sets Supported 00:08:41.469 NVM Command Set: Supported 00:08:41.469 Boot Partition: Not Supported 00:08:41.469 Memory Page Size Minimum: 4096 bytes 00:08:41.469 Memory Page Size Maximum: 65536 bytes 00:08:41.469 Persistent Memory Region: Not Supported 00:08:41.469 Optional Asynchronous Events Supported 00:08:41.469 Namespace Attribute Notices: Supported 00:08:41.469 Firmware Activation Notices: Not Supported 00:08:41.469 ANA Change Notices: Not Supported 00:08:41.469 PLE Aggregate Log Change Notices: Not Supported 00:08:41.469 LBA Status Info Alert Notices: Not Supported 00:08:41.469 EGE Aggregate Log Change Notices: Not Supported 00:08:41.469 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.469 Zone 
Descriptor Change Notices: Not Supported 00:08:41.469 Discovery Log Change Notices: Not Supported 00:08:41.469 Controller Attributes 00:08:41.469 128-bit Host Identifier: Not Supported 00:08:41.469 Non-Operational Permissive Mode: Not Supported 00:08:41.469 NVM Sets: Not Supported 00:08:41.469 Read Recovery Levels: Not Supported 00:08:41.469 Endurance Groups: Supported 00:08:41.469 Predictable Latency Mode: Not Supported 00:08:41.469 Traffic Based Keep Alive: Not Supported 00:08:41.469 Namespace Granularity: Not Supported 00:08:41.469 SQ Associations: Not Supported 00:08:41.469 UUID List: Not Supported 00:08:41.470 Multi-Domain Subsystem: Not Supported 00:08:41.470 Fixed Capacity Management: Not Supported 00:08:41.470 Variable Capacity Management: Not Supported 00:08:41.470 Delete Endurance Group: Not Supported 00:08:41.470 Delete NVM Set: Not Supported 00:08:41.470 Extended LBA Formats Supported: Supported 00:08:41.470 Flexible Data Placement Supported: Supported 00:08:41.470 00:08:41.470 Controller Memory Buffer Support 00:08:41.470 ================================ 00:08:41.470 Supported: No 00:08:41.470 00:08:41.470 Persistent Memory Region Support 00:08:41.470 ================================ 00:08:41.470 Supported: No 00:08:41.470 00:08:41.470 Admin Command Set Attributes 00:08:41.470 ============================ 00:08:41.470 Security Send/Receive: Not Supported 00:08:41.470 Format NVM: Supported 00:08:41.470 Firmware Activate/Download: Not Supported 00:08:41.470 Namespace Management: Supported 00:08:41.470 Device Self-Test: Not Supported 00:08:41.470 Directives: Supported 00:08:41.470 NVMe-MI: Not Supported 00:08:41.470 Virtualization Management: Not Supported 00:08:41.470 Doorbell Buffer Config: Supported 00:08:41.470 Get LBA Status Capability: Not Supported 00:08:41.470 Command & Feature Lockdown Capability: Not Supported 00:08:41.470 Abort Command Limit: 4 00:08:41.470 Async Event Request Limit: 4 00:08:41.470 Number of Firmware Slots: N/A 00:08:41.470 Firmware Slot 1 Read-Only: N/A 00:08:41.470 Firmware Activation Without Reset: N/A 00:08:41.470 Multiple Update Detection Support: N/A 00:08:41.470 Firmware Update Granularity: No Information Provided 00:08:41.470 Per-Namespace SMART Log: Yes 00:08:41.470 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.470 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:41.470 Command Effects Log Page: Supported 00:08:41.470 Get Log Page Extended Data: Supported 00:08:41.470 Telemetry Log Pages: Not Supported 00:08:41.470 Persistent Event Log Pages: Not Supported 00:08:41.470 Supported Log Pages Log Page: May Support 00:08:41.470 Commands Supported & Effects Log Page: Not Supported 00:08:41.470 Feature Identifiers & Effects Log Page: May Support 00:08:41.470 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.470 Data Area 4 for Telemetry Log: Not Supported 00:08:41.470 Error Log Page Entries Supported: 1 00:08:41.470 Keep Alive: Not Supported 00:08:41.470 00:08:41.470 NVM Command Set Attributes 00:08:41.470 ========================== 00:08:41.470 Submission Queue Entry Size 00:08:41.470 Max: 64 00:08:41.470 Min: 64 00:08:41.470 Completion Queue Entry Size 00:08:41.470 Max: 16 00:08:41.470 Min: 16 00:08:41.470 Number of Namespaces: 256 00:08:41.470 Compare Command: Supported 00:08:41.470 Write Uncorrectable Command: Not Supported 00:08:41.470 Dataset Management Command: Supported 00:08:41.470 Write Zeroes Command: Supported 00:08:41.470 Set Features Save Field: Supported 00:08:41.470 Reservations: Not Supported 00:08:41.470 
Timestamp: Supported 00:08:41.470 Copy: Supported 00:08:41.470 Volatile Write Cache: Present 00:08:41.470 Atomic Write Unit (Normal): 1 00:08:41.470 Atomic Write Unit (PFail): 1 00:08:41.470 Atomic Compare & Write Unit: 1 00:08:41.470 Fused Compare & Write: Not Supported 00:08:41.470 Scatter-Gather List 00:08:41.470 SGL Command Set: Supported 00:08:41.470 SGL Keyed: Not Supported 00:08:41.470 SGL Bit Bucket Descriptor: Not Supported 00:08:41.470 SGL Metadata Pointer: Not Supported 00:08:41.470 Oversized SGL: Not Supported 00:08:41.470 SGL Metadata Address: Not Supported 00:08:41.470 SGL Offset: Not Supported 00:08:41.470 Transport SGL Data Block: Not Supported 00:08:41.470 Replay Protected Memory Block: Not Supported 00:08:41.470 00:08:41.470 Firmware Slot Information 00:08:41.470 ========================= 00:08:41.470 Active slot: 1 00:08:41.470 Slot 1 Firmware Revision: 1.0 00:08:41.470 00:08:41.470 00:08:41.470 Commands Supported and Effects 00:08:41.470 ============================== 00:08:41.470 Admin Commands 00:08:41.470 -------------- 00:08:41.470 Delete I/O Submission Queue (00h): Supported 00:08:41.470 Create I/O Submission Queue (01h): Supported 00:08:41.470 Get Log Page (02h): Supported 00:08:41.470 Delete I/O Completion Queue (04h): Supported 00:08:41.470 Create I/O Completion Queue (05h): Supported 00:08:41.470 Identify (06h): Supported 00:08:41.470 Abort (08h): Supported 00:08:41.470 Set Features (09h): Supported 00:08:41.470 Get Features (0Ah): Supported 00:08:41.470 Asynchronous Event Request (0Ch): Supported 00:08:41.470 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.470 Directive Send (19h): Supported 00:08:41.470 Directive Receive (1Ah): Supported 00:08:41.470 Virtualization Management (1Ch): Supported 00:08:41.470 Doorbell Buffer Config (7Ch): Supported 00:08:41.470 Format NVM (80h): Supported LBA-Change 00:08:41.470 I/O Commands 00:08:41.470 ------------ 00:08:41.470 Flush (00h): Supported LBA-Change 00:08:41.470 Write (01h): Supported LBA-Change 00:08:41.470 Read (02h): Supported 00:08:41.470 Compare (05h): Supported 00:08:41.470 Write Zeroes (08h): Supported LBA-Change 00:08:41.470 Dataset Management (09h): Supported LBA-Change 00:08:41.470 Unknown (0Ch): Supported 00:08:41.470 Unknown (12h): Supported 00:08:41.470 Copy (19h): Supported LBA-Change 00:08:41.470 Unknown (1Dh): Supported LBA-Change 00:08:41.470 00:08:41.470 Error Log 00:08:41.470 ========= 00:08:41.470 00:08:41.470 Arbitration 00:08:41.470 =========== 00:08:41.470 Arbitration Burst: no limit 00:08:41.470 00:08:41.470 Power Management 00:08:41.470 ================ 00:08:41.470 Number of Power States: 1 00:08:41.470 Current Power State: Power State #0 00:08:41.470 Power State #0: 00:08:41.470 Max Power: 25.00 W 00:08:41.470 Non-Operational State: Operational 00:08:41.470 Entry Latency: 16 microseconds 00:08:41.470 Exit Latency: 4 microseconds 00:08:41.470 Relative Read Throughput: 0 00:08:41.470 Relative Read Latency: 0 00:08:41.470 Relative Write Throughput: 0 00:08:41.470 Relative Write Latency: 0 00:08:41.470 Idle Power: Not Reported 00:08:41.470 Active Power: Not Reported 00:08:41.470 Non-Operational Permissive Mode: Not Supported 00:08:41.470 00:08:41.470 Health Information 00:08:41.470 ================== 00:08:41.470 Critical Warnings: 00:08:41.470 Available Spare Space: OK 00:08:41.470 Temperature: OK 00:08:41.470 Device Reliability: OK 00:08:41.470 Read Only: No 00:08:41.470 Volatile Memory Backup: OK 00:08:41.470 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.470 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.470 Available Spare: 0% 00:08:41.470 Available Spare Threshold: 0% 00:08:41.470 Life Percentage Used: 0% 00:08:41.470 Data Units Read: 1190 00:08:41.470 Data Units Written: 548 00:08:41.470 Host Read Commands: 52646 00:08:41.470 Host Write Commands: 25754 00:08:41.470 Controller Busy Time: 0 minutes 00:08:41.470 Power Cycles: 0 00:08:41.470 Power On Hours: 0 hours 00:08:41.470 Unsafe Shutdowns: 0 00:08:41.470 Unrecoverable Media Errors: 0 00:08:41.470 Lifetime Error Log Entries: 0 00:08:41.470 Warning Temperature Time: 0 minutes 00:08:41.470 Critical Temperature Time: 0 minutes 00:08:41.470 00:08:41.470 Number of Queues 00:08:41.470 ================ 00:08:41.470 Number of I/O Submission Queues: 64 00:08:41.470 Number of I/O Completion Queues: 64 00:08:41.470 00:08:41.470 ZNS Specific Controller Data 00:08:41.470 ============================ 00:08:41.470 Zone Append Size Limit: 0 00:08:41.470 00:08:41.470 00:08:41.470 Active Namespaces 00:08:41.470 ================= 00:08:41.470 Namespace ID:1 00:08:41.470 Error Recovery Timeout: Unlimited 00:08:41.470 Command Set Identifier: NVM (00h) 00:08:41.470 Deallocate: Supported 00:08:41.470 Deallocated/Unwritten Error: Supported 00:08:41.470 Deallocated Read Value: All 0x00 00:08:41.470 Deallocate in Write Zeroes: Not Supported 00:08:41.470 Deallocated Guard Field: 0xFFFF 00:08:41.470 Flush: Supported 00:08:41.470 Reservation: Not Supported 00:08:41.470 Namespace Sharing Capabilities: Multiple Controllers 00:08:41.470 Size (in LBAs): 262144 (1GiB) 00:08:41.470 Capacity (in LBAs): 262144 (1GiB) 00:08:41.470 Utilization (in LBAs): 262144 (1GiB) 00:08:41.470 Thin Provisioning: Not Supported 00:08:41.470 Per-NS Atomic Units: No 00:08:41.470 Maximum Single Source Range Length: 128 00:08:41.470 Maximum Copy Length: 128 00:08:41.470 Maximum Source Range Count: 128 00:08:41.470 NGUID/EUI64 Never Reused: No 00:08:41.470 Namespace Write Protected: No 00:08:41.470 Endurance group ID: 1 00:08:41.470 Number of LBA Formats: 8 00:08:41.470 Current LBA Format: LBA Format #04 00:08:41.471 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.471 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.471 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.471 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.471 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.471 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.471 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.471 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.471 00:08:41.471 Get Feature FDP: 00:08:41.471 ================ 00:08:41.471 Enabled: Yes 00:08:41.471 FDP configuration index: 0 00:08:41.471 00:08:41.471 FDP configurations log page 00:08:41.471 =========================== 00:08:41.471 Number of FDP configurations: 1 00:08:41.471 Version: 0 00:08:41.471 Size: 112 00:08:41.471 FDP Configuration Descriptor: 0 00:08:41.471 Descriptor Size: 96 00:08:41.471 Reclaim Group Identifier format: 2 00:08:41.471 FDP Volatile Write Cache: Not Present 00:08:41.471 FDP Configuration: Valid 00:08:41.471 Vendor Specific Size: 0 00:08:41.471 Number of Reclaim Groups: 2 00:08:41.471 Number of Reclaim Unit Handles: 8 00:08:41.471 Max Placement Identifiers: 128 00:08:41.471 Number of Namespaces Supported: 256 00:08:41.471 Reclaim Unit Nominal Size: 6000000 bytes 00:08:41.471 Estimated Reclaim Unit Time Limit: Not Reported 00:08:41.471 RUH Desc #000: RUH Type: Initially Isolated 00:08:41.471 RUH Desc #001: RUH 
Type: Initially Isolated 00:08:41.471 RUH Desc #002: RUH Type: Initially Isolated 00:08:41.471 RUH Desc #003: RUH Type: Initially Isolated 00:08:41.471 RUH Desc #004: RUH Type: Initially Isolated 00:08:41.471 RUH Desc #005: RUH Type: Initially Isolated 00:08:41.471 RUH Desc #006: RUH Type: Initially Isolated 00:08:41.471 RUH Desc #007: RUH Type: Initially Isolated 00:08:41.471 00:08:41.471 FDP reclaim unit handle usage log page 00:08:41.471 ====================================== 00:08:41.471 Number of Reclaim Unit Handles: 8 00:08:41.471 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:41.471 RUH Usage Desc #001: RUH Attributes: Unused 00:08:41.471 RUH Usage Desc #002: RUH Attributes: Unused 00:08:41.471 RUH Usage Desc #003: RUH Attributes: Unused 00:08:41.471 RUH Usage Desc #004: RUH Attributes: Unused 00:08:41.471 RUH Usage Desc #005: RUH Attributes: Unused 00:08:41.471 RUH Usage Desc #006: RUH Attributes: Unused 00:08:41.471 RUH Usage Desc #007: RUH Attributes: Unused 00:08:41.471 00:08:41.471 FDP statistics log page 00:08:41.471 ======================= 00:08:41.471 Host bytes with metadata written: 371154944 00:08:41.471 Media bytes with metadata written: 371302400 00:08:41.471 Media bytes erased: 0 00:08:41.471 00:08:41.471 FDP events log page 00:08:41.471 =================== 00:08:41.471 Number of FDP events: 0 00:08:41.471 00:08:41.471 ===================================================== 00:08:41.471 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:08:41.471 ===================================================== 00:08:41.471 Controller Capabilities/Features 00:08:41.471 ================================ 00:08:41.471 Vendor ID: 1b36 00:08:41.471 Subsystem Vendor ID: 1af4 00:08:41.471 Serial Number: 12342 00:08:41.471 Model Number: QEMU NVMe Ctrl 00:08:41.471 Firmware Version: 8.0.0 00:08:41.471 Recommended Arb Burst: 6 00:08:41.471 IEEE OUI Identifier: 00 54 52 00:08:41.471 Multi-path I/O 00:08:41.471 May have multiple subsystem ports: No 00:08:41.471 May have multiple controllers: No 00:08:41.471 Associated with SR-IOV VF: No 00:08:41.471 Max Data Transfer Size: 524288 00:08:41.471 Max Number of Namespaces: 256 00:08:41.471 Max Number of I/O Queues: 64 00:08:41.471 NVMe Specification Version (VS): 1.4 00:08:41.471 NVMe Specification Version (Identify): 1.4 00:08:41.471 Maximum Queue Entries: 2048 00:08:41.471 Contiguous Queues Required: Yes 00:08:41.471 Arbitration Mechanisms Supported 00:08:41.471 Weighted Round Robin: Not Supported 00:08:41.471 Vendor Specific: Not Supported 00:08:41.471 Reset Timeout: 7500 ms 00:08:41.471 Doorbell Stride: 4 bytes 00:08:41.471 NVM Subsystem Reset: Not Supported 00:08:41.471 Command Sets Supported 00:08:41.471 NVM Command Set: Supported 00:08:41.471 Boot Partition: Not Supported 00:08:41.471 Memory Page Size Minimum: 4096 bytes 00:08:41.471 Memory Page Size Maximum: 65536 bytes 00:08:41.471 Persistent Memory Region: Not Supported 00:08:41.471 Optional Asynchronous Events Supported 00:08:41.471 Namespace Attribute Notices: Supported 00:08:41.471 Firmware Activation Notices: Not Supported 00:08:41.471 ANA Change Notices: Not Supported 00:08:41.471 PLE Aggregate Log Change Notices: Not Supported 00:08:41.471 LBA Status Info Alert Notices: Not Supported 00:08:41.471 EGE Aggregate Log Change Notices: Not Supported 00:08:41.471 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.471 Zone Descriptor Change Notices: Not Supported 00:08:41.471 Discovery Log Change Notices: Not Supported 00:08:41.471 Controller Attributes 00:08:41.471 
128-bit Host Identifier: Not Supported 00:08:41.471 Non-Operational Permissive Mode: Not Supported 00:08:41.471 NVM Sets: Not Supported 00:08:41.471 Read Recovery Levels: Not Supported 00:08:41.471 Endurance Groups: Not Supported 00:08:41.471 Predictable Latency Mode: Not Supported 00:08:41.471 Traffic Based Keep Alive: Not Supported 00:08:41.471 Namespace Granularity: Not Supported 00:08:41.471 SQ Associations: Not Supported 00:08:41.471 UUID List: Not Supported 00:08:41.471 Multi-Domain Subsystem: Not Supported 00:08:41.471 Fixed Capacity Management: Not Supported 00:08:41.471 Variable Capacity Management: Not Supported 00:08:41.471 Delete Endurance Group: Not Supported 00:08:41.471 Delete NVM Set: Not Supported 00:08:41.471 Extended LBA Formats Supported: Supported 00:08:41.471 Flexible Data Placement Supported: Not Supported 00:08:41.471 00:08:41.471 Controller Memory Buffer Support 00:08:41.471 ================================ 00:08:41.471 Supported: No 00:08:41.471 00:08:41.471 Persistent Memory Region Support 00:08:41.471 ================================ 00:08:41.471 Supported: No 00:08:41.471 00:08:41.471 Admin Command Set Attributes 00:08:41.471 ============================ 00:08:41.471 Security Send/Receive: Not Supported 00:08:41.471 Format NVM: Supported 00:08:41.471 Firmware Activate/Download: Not Supported 00:08:41.471 Namespace Management: Supported 00:08:41.471 Device Self-Test: Not Supported 00:08:41.471 Directives: Supported 00:08:41.471 NVMe-MI: Not Supported 00:08:41.471 Virtualization Management: Not Supported 00:08:41.471 Doorbell Buffer Config: Supported 00:08:41.471 Get LBA Status Capability: Not Supported 00:08:41.471 Command & Feature Lockdown Capability: Not Supported 00:08:41.471 Abort Command Limit: 4 00:08:41.471 Async Event Request Limit: 4 00:08:41.471 Number of Firmware Slots: N/A 00:08:41.471 Firmware Slot 1 Read-Only: N/A 00:08:41.471 Firmware Activation Without Reset: N/A 00:08:41.471 Multiple Update Detection Support: N/A 00:08:41.471 Firmware Update Granularity: No Information Provided 00:08:41.471 Per-Namespace SMART Log: Yes 00:08:41.471 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.471 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:41.471 Command Effects Log Page: Supported 00:08:41.471 Get Log Page Extended Data: Supported 00:08:41.471 Telemetry Log Pages: Not Supported 00:08:41.471 Persistent Event Log Pages: Not Supported 00:08:41.472 Supported Log Pages Log Page: May Support 00:08:41.472 Commands Supported & Effects Log Page: Not Supported 00:08:41.472 Feature Identifiers & Effects Log Page: May Support 00:08:41.472 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.472 Data Area 4 for Telemetry Log: Not Supported 00:08:41.472 Error Log Page Entries Supported: 1 00:08:41.472 Keep Alive: Not Supported 00:08:41.472 00:08:41.472 NVM Command Set Attributes 00:08:41.472 ========================== 00:08:41.472 Submission Queue Entry Size 00:08:41.472 Max: 64 00:08:41.472 Min: 64 00:08:41.472 Completion Queue Entry Size 00:08:41.472 Max: 16 00:08:41.472 Min: 16 00:08:41.472 Number of Namespaces: 256 00:08:41.472 Compare Command: Supported 00:08:41.472 Write Uncorrectable Command: Not Supported 00:08:41.472 Dataset Management Command: Supported 00:08:41.472 Write Zeroes Command: Supported 00:08:41.472 Set Features Save Field: Supported 00:08:41.472 Reservations: Not Supported 00:08:41.472 Timestamp: Supported 00:08:41.472 Copy: Supported 00:08:41.472 Volatile Write Cache: Present 00:08:41.472 Atomic Write Unit (Normal): 1 
00:08:41.472 Atomic Write Unit (PFail): 1 00:08:41.472 Atomic Compare & Write Unit: 1 00:08:41.472 Fused Compare & Write: Not Supported 00:08:41.472 Scatter-Gather List 00:08:41.472 SGL Command Set: Supported 00:08:41.472 SGL Keyed: Not Supported 00:08:41.472 SGL Bit Bucket Descriptor: Not Supported 00:08:41.472 SGL Metadata Pointer: Not Supported 00:08:41.472 Oversized SGL: Not Supported 00:08:41.472 SGL Metadata Address: Not Supported 00:08:41.472 SGL Offset: Not Supported 00:08:41.472 Transport SGL Data Block: Not Supported 00:08:41.472 Replay Protected Memory Block: Not Supported 00:08:41.472 00:08:41.472 Firmware Slot Information 00:08:41.472 ========================= 00:08:41.472 [2024-11-26 04:04:43.145568] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 75038 terminated unexpected 00:08:41.472 [2024-11-26 04:04:43.146496] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 75038 terminated unexpected 00:08:41.472 [2024-11-26 04:04:43.147093] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 75038 terminated unexpected 00:08:41.472 [2024-11-26 04:04:43.148044] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 75038 terminated unexpected 00:08:41.472 Active slot: 1 00:08:41.472 Slot 1 Firmware Revision: 1.0 00:08:41.472 00:08:41.472 00:08:41.472 Commands Supported and Effects 00:08:41.472 ============================== 00:08:41.472 Admin Commands 00:08:41.472 -------------- 00:08:41.472 Delete I/O Submission Queue (00h): Supported 00:08:41.472 Create I/O Submission Queue (01h): Supported 00:08:41.472 Get Log Page (02h): Supported 00:08:41.472 Delete I/O Completion Queue (04h): Supported 00:08:41.472 Create I/O Completion Queue (05h): Supported 00:08:41.472 Identify (06h): Supported 00:08:41.472 Abort (08h): Supported 00:08:41.472 Set Features (09h): Supported 00:08:41.472 Get Features (0Ah): Supported 00:08:41.472 Asynchronous Event Request (0Ch): Supported 00:08:41.472 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.472 Directive Send (19h): Supported 00:08:41.472 Directive Receive (1Ah): Supported 00:08:41.472 Virtualization Management (1Ch): Supported 00:08:41.472 Doorbell Buffer Config (7Ch): Supported 00:08:41.472 Format NVM (80h): Supported LBA-Change 00:08:41.472 I/O Commands 00:08:41.472 ------------ 00:08:41.472 Flush (00h): Supported LBA-Change 00:08:41.472 Write (01h): Supported LBA-Change 00:08:41.472 Read (02h): Supported 00:08:41.472 Compare (05h): Supported 00:08:41.472 Write Zeroes (08h): Supported LBA-Change 00:08:41.472 Dataset Management (09h): Supported LBA-Change 00:08:41.472 Unknown (0Ch): Supported 00:08:41.472 Unknown (12h): Supported 00:08:41.472 Copy (19h): Supported LBA-Change 00:08:41.472 Unknown (1Dh): Supported LBA-Change 00:08:41.472 00:08:41.472 Error Log 00:08:41.472 ========= 00:08:41.472 00:08:41.472 Arbitration 00:08:41.472 =========== 00:08:41.472 Arbitration Burst: no limit 00:08:41.472 00:08:41.472 Power Management 00:08:41.472 ================ 00:08:41.472 Number of Power States: 1 00:08:41.472 Current Power State: Power State #0 00:08:41.472 Power State #0: 00:08:41.472 Max Power: 25.00 W 00:08:41.472 Non-Operational State: Operational 00:08:41.472 Entry Latency: 16 microseconds 00:08:41.472 Exit Latency: 4 microseconds 00:08:41.472 Relative Read Throughput: 0 00:08:41.472 Relative Read Latency: 0 00:08:41.472 Relative Write Throughput: 0 00:08:41.472 Relative Write Latency: 0 00:08:41.472 Idle 
Power: Not Reported 00:08:41.472 Active Power: Not Reported 00:08:41.472 Non-Operational Permissive Mode: Not Supported 00:08:41.472 00:08:41.472 Health Information 00:08:41.472 ================== 00:08:41.472 Critical Warnings: 00:08:41.472 Available Spare Space: OK 00:08:41.472 Temperature: OK 00:08:41.472 Device Reliability: OK 00:08:41.472 Read Only: No 00:08:41.472 Volatile Memory Backup: OK 00:08:41.472 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.472 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.472 Available Spare: 0% 00:08:41.472 Available Spare Threshold: 0% 00:08:41.472 Life Percentage Used: 0% 00:08:41.472 Data Units Read: 3428 00:08:41.472 Data Units Written: 1571 00:08:41.472 Host Read Commands: 156829 00:08:41.472 Host Write Commands: 76578 00:08:41.472 Controller Busy Time: 0 minutes 00:08:41.472 Power Cycles: 0 00:08:41.472 Power On Hours: 0 hours 00:08:41.472 Unsafe Shutdowns: 0 00:08:41.472 Unrecoverable Media Errors: 0 00:08:41.472 Lifetime Error Log Entries: 0 00:08:41.472 Warning Temperature Time: 0 minutes 00:08:41.472 Critical Temperature Time: 0 minutes 00:08:41.472 00:08:41.472 Number of Queues 00:08:41.472 ================ 00:08:41.472 Number of I/O Submission Queues: 64 00:08:41.472 Number of I/O Completion Queues: 64 00:08:41.472 00:08:41.472 ZNS Specific Controller Data 00:08:41.472 ============================ 00:08:41.472 Zone Append Size Limit: 0 00:08:41.472 00:08:41.472 00:08:41.472 Active Namespaces 00:08:41.472 ================= 00:08:41.472 Namespace ID:1 00:08:41.472 Error Recovery Timeout: Unlimited 00:08:41.472 Command Set Identifier: NVM (00h) 00:08:41.472 Deallocate: Supported 00:08:41.472 Deallocated/Unwritten Error: Supported 00:08:41.472 Deallocated Read Value: All 0x00 00:08:41.472 Deallocate in Write Zeroes: Not Supported 00:08:41.472 Deallocated Guard Field: 0xFFFF 00:08:41.472 Flush: Supported 00:08:41.472 Reservation: Not Supported 00:08:41.472 Namespace Sharing Capabilities: Private 00:08:41.472 Size (in LBAs): 1048576 (4GiB) 00:08:41.472 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.472 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.472 Thin Provisioning: Not Supported 00:08:41.472 Per-NS Atomic Units: No 00:08:41.472 Maximum Single Source Range Length: 128 00:08:41.472 Maximum Copy Length: 128 00:08:41.472 Maximum Source Range Count: 128 00:08:41.472 NGUID/EUI64 Never Reused: No 00:08:41.472 Namespace Write Protected: No 00:08:41.472 Number of LBA Formats: 8 00:08:41.472 Current LBA Format: LBA Format #04 00:08:41.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.472 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.472 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.472 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.472 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.472 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.472 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.472 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.472 00:08:41.472 Namespace ID:2 00:08:41.472 Error Recovery Timeout: Unlimited 00:08:41.472 Command Set Identifier: NVM (00h) 00:08:41.472 Deallocate: Supported 00:08:41.472 Deallocated/Unwritten Error: Supported 00:08:41.472 Deallocated Read Value: All 0x00 00:08:41.472 Deallocate in Write Zeroes: Not Supported 00:08:41.472 Deallocated Guard Field: 0xFFFF 00:08:41.472 Flush: Supported 00:08:41.472 Reservation: Not Supported 00:08:41.472 Namespace Sharing Capabilities: Private 00:08:41.472 Size (in LBAs): 1048576 (4GiB) 
00:08:41.472 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.472 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.472 Thin Provisioning: Not Supported 00:08:41.472 Per-NS Atomic Units: No 00:08:41.472 Maximum Single Source Range Length: 128 00:08:41.472 Maximum Copy Length: 128 00:08:41.472 Maximum Source Range Count: 128 00:08:41.472 NGUID/EUI64 Never Reused: No 00:08:41.472 Namespace Write Protected: No 00:08:41.472 Number of LBA Formats: 8 00:08:41.472 Current LBA Format: LBA Format #04 00:08:41.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.472 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.473 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.473 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.473 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.473 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.473 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.473 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.473 00:08:41.473 Namespace ID:3 00:08:41.473 Error Recovery Timeout: Unlimited 00:08:41.473 Command Set Identifier: NVM (00h) 00:08:41.473 Deallocate: Supported 00:08:41.473 Deallocated/Unwritten Error: Supported 00:08:41.473 Deallocated Read Value: All 0x00 00:08:41.473 Deallocate in Write Zeroes: Not Supported 00:08:41.473 Deallocated Guard Field: 0xFFFF 00:08:41.473 Flush: Supported 00:08:41.473 Reservation: Not Supported 00:08:41.473 Namespace Sharing Capabilities: Private 00:08:41.473 Size (in LBAs): 1048576 (4GiB) 00:08:41.473 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.473 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.473 Thin Provisioning: Not Supported 00:08:41.473 Per-NS Atomic Units: No 00:08:41.473 Maximum Single Source Range Length: 128 00:08:41.473 Maximum Copy Length: 128 00:08:41.473 Maximum Source Range Count: 128 00:08:41.473 NGUID/EUI64 Never Reused: No 00:08:41.473 Namespace Write Protected: No 00:08:41.473 Number of LBA Formats: 8 00:08:41.473 Current LBA Format: LBA Format #04 00:08:41.473 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.473 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.473 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.473 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.473 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.473 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.473 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.473 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.473 00:08:41.473 04:04:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:41.473 04:04:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:08:41.735 ===================================================== 00:08:41.735 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:08:41.735 ===================================================== 00:08:41.735 Controller Capabilities/Features 00:08:41.735 ================================ 00:08:41.735 Vendor ID: 1b36 00:08:41.735 Subsystem Vendor ID: 1af4 00:08:41.735 Serial Number: 12340 00:08:41.735 Model Number: QEMU NVMe Ctrl 00:08:41.735 Firmware Version: 8.0.0 00:08:41.735 Recommended Arb Burst: 6 00:08:41.735 IEEE OUI Identifier: 00 54 52 00:08:41.735 Multi-path I/O 00:08:41.735 May have multiple subsystem ports: No 00:08:41.735 May have multiple controllers: No 00:08:41.735 Associated with SR-IOV VF: No 00:08:41.735 Max Data Transfer Size: 524288 00:08:41.735 Max Number of Namespaces: 256 00:08:41.735 Max 
Number of I/O Queues: 64 00:08:41.735 NVMe Specification Version (VS): 1.4 00:08:41.735 NVMe Specification Version (Identify): 1.4 00:08:41.735 Maximum Queue Entries: 2048 00:08:41.735 Contiguous Queues Required: Yes 00:08:41.735 Arbitration Mechanisms Supported 00:08:41.735 Weighted Round Robin: Not Supported 00:08:41.735 Vendor Specific: Not Supported 00:08:41.735 Reset Timeout: 7500 ms 00:08:41.735 Doorbell Stride: 4 bytes 00:08:41.735 NVM Subsystem Reset: Not Supported 00:08:41.735 Command Sets Supported 00:08:41.735 NVM Command Set: Supported 00:08:41.735 Boot Partition: Not Supported 00:08:41.735 Memory Page Size Minimum: 4096 bytes 00:08:41.735 Memory Page Size Maximum: 65536 bytes 00:08:41.735 Persistent Memory Region: Not Supported 00:08:41.735 Optional Asynchronous Events Supported 00:08:41.735 Namespace Attribute Notices: Supported 00:08:41.735 Firmware Activation Notices: Not Supported 00:08:41.735 ANA Change Notices: Not Supported 00:08:41.735 PLE Aggregate Log Change Notices: Not Supported 00:08:41.735 LBA Status Info Alert Notices: Not Supported 00:08:41.735 EGE Aggregate Log Change Notices: Not Supported 00:08:41.735 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.735 Zone Descriptor Change Notices: Not Supported 00:08:41.735 Discovery Log Change Notices: Not Supported 00:08:41.735 Controller Attributes 00:08:41.735 128-bit Host Identifier: Not Supported 00:08:41.735 Non-Operational Permissive Mode: Not Supported 00:08:41.735 NVM Sets: Not Supported 00:08:41.735 Read Recovery Levels: Not Supported 00:08:41.735 Endurance Groups: Not Supported 00:08:41.735 Predictable Latency Mode: Not Supported 00:08:41.735 Traffic Based Keep Alive: Not Supported 00:08:41.735 Namespace Granularity: Not Supported 00:08:41.735 SQ Associations: Not Supported 00:08:41.735 UUID List: Not Supported 00:08:41.735 Multi-Domain Subsystem: Not Supported 00:08:41.735 Fixed Capacity Management: Not Supported 00:08:41.735 Variable Capacity Management: Not Supported 00:08:41.735 Delete Endurance Group: Not Supported 00:08:41.735 Delete NVM Set: Not Supported 00:08:41.735 Extended LBA Formats Supported: Supported 00:08:41.735 Flexible Data Placement Supported: Not Supported 00:08:41.735 00:08:41.735 Controller Memory Buffer Support 00:08:41.735 ================================ 00:08:41.735 Supported: No 00:08:41.735 00:08:41.735 Persistent Memory Region Support 00:08:41.735 ================================ 00:08:41.735 Supported: No 00:08:41.735 00:08:41.735 Admin Command Set Attributes 00:08:41.735 ============================ 00:08:41.735 Security Send/Receive: Not Supported 00:08:41.735 Format NVM: Supported 00:08:41.735 Firmware Activate/Download: Not Supported 00:08:41.735 Namespace Management: Supported 00:08:41.735 Device Self-Test: Not Supported 00:08:41.735 Directives: Supported 00:08:41.735 NVMe-MI: Not Supported 00:08:41.735 Virtualization Management: Not Supported 00:08:41.735 Doorbell Buffer Config: Supported 00:08:41.735 Get LBA Status Capability: Not Supported 00:08:41.735 Command & Feature Lockdown Capability: Not Supported 00:08:41.735 Abort Command Limit: 4 00:08:41.735 Async Event Request Limit: 4 00:08:41.735 Number of Firmware Slots: N/A 00:08:41.735 Firmware Slot 1 Read-Only: N/A 00:08:41.735 Firmware Activation Without Reset: N/A 00:08:41.735 Multiple Update Detection Support: N/A 00:08:41.735 Firmware Update Granularity: No Information Provided 00:08:41.735 Per-Namespace SMART Log: Yes 00:08:41.735 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.735 Subsystem 
NQN: nqn.2019-08.org.qemu:12340 00:08:41.735 Command Effects Log Page: Supported 00:08:41.735 Get Log Page Extended Data: Supported 00:08:41.735 Telemetry Log Pages: Not Supported 00:08:41.735 Persistent Event Log Pages: Not Supported 00:08:41.735 Supported Log Pages Log Page: May Support 00:08:41.735 Commands Supported & Effects Log Page: Not Supported 00:08:41.735 Feature Identifiers & Effects Log Page: May Support 00:08:41.735 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.735 Data Area 4 for Telemetry Log: Not Supported 00:08:41.735 Error Log Page Entries Supported: 1 00:08:41.735 Keep Alive: Not Supported 00:08:41.735 00:08:41.735 NVM Command Set Attributes 00:08:41.735 ========================== 00:08:41.735 Submission Queue Entry Size 00:08:41.735 Max: 64 00:08:41.735 Min: 64 00:08:41.735 Completion Queue Entry Size 00:08:41.735 Max: 16 00:08:41.735 Min: 16 00:08:41.735 Number of Namespaces: 256 00:08:41.735 Compare Command: Supported 00:08:41.735 Write Uncorrectable Command: Not Supported 00:08:41.735 Dataset Management Command: Supported 00:08:41.735 Write Zeroes Command: Supported 00:08:41.735 Set Features Save Field: Supported 00:08:41.735 Reservations: Not Supported 00:08:41.735 Timestamp: Supported 00:08:41.735 Copy: Supported 00:08:41.735 Volatile Write Cache: Present 00:08:41.735 Atomic Write Unit (Normal): 1 00:08:41.735 Atomic Write Unit (PFail): 1 00:08:41.735 Atomic Compare & Write Unit: 1 00:08:41.735 Fused Compare & Write: Not Supported 00:08:41.735 Scatter-Gather List 00:08:41.735 SGL Command Set: Supported 00:08:41.735 SGL Keyed: Not Supported 00:08:41.735 SGL Bit Bucket Descriptor: Not Supported 00:08:41.735 SGL Metadata Pointer: Not Supported 00:08:41.735 Oversized SGL: Not Supported 00:08:41.735 SGL Metadata Address: Not Supported 00:08:41.735 SGL Offset: Not Supported 00:08:41.735 Transport SGL Data Block: Not Supported 00:08:41.735 Replay Protected Memory Block: Not Supported 00:08:41.735 00:08:41.735 Firmware Slot Information 00:08:41.735 ========================= 00:08:41.735 Active slot: 1 00:08:41.735 Slot 1 Firmware Revision: 1.0 00:08:41.735 00:08:41.735 00:08:41.735 Commands Supported and Effects 00:08:41.735 ============================== 00:08:41.735 Admin Commands 00:08:41.735 -------------- 00:08:41.735 Delete I/O Submission Queue (00h): Supported 00:08:41.735 Create I/O Submission Queue (01h): Supported 00:08:41.735 Get Log Page (02h): Supported 00:08:41.735 Delete I/O Completion Queue (04h): Supported 00:08:41.735 Create I/O Completion Queue (05h): Supported 00:08:41.735 Identify (06h): Supported 00:08:41.735 Abort (08h): Supported 00:08:41.735 Set Features (09h): Supported 00:08:41.735 Get Features (0Ah): Supported 00:08:41.735 Asynchronous Event Request (0Ch): Supported 00:08:41.735 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.735 Directive Send (19h): Supported 00:08:41.735 Directive Receive (1Ah): Supported 00:08:41.735 Virtualization Management (1Ch): Supported 00:08:41.735 Doorbell Buffer Config (7Ch): Supported 00:08:41.735 Format NVM (80h): Supported LBA-Change 00:08:41.735 I/O Commands 00:08:41.735 ------------ 00:08:41.735 Flush (00h): Supported LBA-Change 00:08:41.735 Write (01h): Supported LBA-Change 00:08:41.735 Read (02h): Supported 00:08:41.735 Compare (05h): Supported 00:08:41.735 Write Zeroes (08h): Supported LBA-Change 00:08:41.735 Dataset Management (09h): Supported LBA-Change 00:08:41.735 Unknown (0Ch): Supported 00:08:41.735 Unknown (12h): Supported 00:08:41.735 Copy (19h): Supported LBA-Change 
00:08:41.735 Unknown (1Dh): Supported LBA-Change 00:08:41.735 00:08:41.735 Error Log 00:08:41.735 ========= 00:08:41.735 00:08:41.735 Arbitration 00:08:41.735 =========== 00:08:41.735 Arbitration Burst: no limit 00:08:41.735 00:08:41.736 Power Management 00:08:41.736 ================ 00:08:41.736 Number of Power States: 1 00:08:41.736 Current Power State: Power State #0 00:08:41.736 Power State #0: 00:08:41.736 Max Power: 25.00 W 00:08:41.736 Non-Operational State: Operational 00:08:41.736 Entry Latency: 16 microseconds 00:08:41.736 Exit Latency: 4 microseconds 00:08:41.736 Relative Read Throughput: 0 00:08:41.736 Relative Read Latency: 0 00:08:41.736 Relative Write Throughput: 0 00:08:41.736 Relative Write Latency: 0 00:08:41.736 Idle Power: Not Reported 00:08:41.736 Active Power: Not Reported 00:08:41.736 Non-Operational Permissive Mode: Not Supported 00:08:41.736 00:08:41.736 Health Information 00:08:41.736 ================== 00:08:41.736 Critical Warnings: 00:08:41.736 Available Spare Space: OK 00:08:41.736 Temperature: OK 00:08:41.736 Device Reliability: OK 00:08:41.736 Read Only: No 00:08:41.736 Volatile Memory Backup: OK 00:08:41.736 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.736 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.736 Available Spare: 0% 00:08:41.736 Available Spare Threshold: 0% 00:08:41.736 Life Percentage Used: 0% 00:08:41.736 Data Units Read: 1627 00:08:41.736 Data Units Written: 746 00:08:41.736 Host Read Commands: 74008 00:08:41.736 Host Write Commands: 36645 00:08:41.736 Controller Busy Time: 0 minutes 00:08:41.736 Power Cycles: 0 00:08:41.736 Power On Hours: 0 hours 00:08:41.736 Unsafe Shutdowns: 0 00:08:41.736 Unrecoverable Media Errors: 0 00:08:41.736 Lifetime Error Log Entries: 0 00:08:41.736 Warning Temperature Time: 0 minutes 00:08:41.736 Critical Temperature Time: 0 minutes 00:08:41.736 00:08:41.736 Number of Queues 00:08:41.736 ================ 00:08:41.736 Number of I/O Submission Queues: 64 00:08:41.736 Number of I/O Completion Queues: 64 00:08:41.736 00:08:41.736 ZNS Specific Controller Data 00:08:41.736 ============================ 00:08:41.736 Zone Append Size Limit: 0 00:08:41.736 00:08:41.736 00:08:41.736 Active Namespaces 00:08:41.736 ================= 00:08:41.736 Namespace ID:1 00:08:41.736 Error Recovery Timeout: Unlimited 00:08:41.736 Command Set Identifier: NVM (00h) 00:08:41.736 Deallocate: Supported 00:08:41.736 Deallocated/Unwritten Error: Supported 00:08:41.736 Deallocated Read Value: All 0x00 00:08:41.736 Deallocate in Write Zeroes: Not Supported 00:08:41.736 Deallocated Guard Field: 0xFFFF 00:08:41.736 Flush: Supported 00:08:41.736 Reservation: Not Supported 00:08:41.736 Metadata Transferred as: Separate Metadata Buffer 00:08:41.736 Namespace Sharing Capabilities: Private 00:08:41.736 Size (in LBAs): 1548666 (5GiB) 00:08:41.736 Capacity (in LBAs): 1548666 (5GiB) 00:08:41.736 Utilization (in LBAs): 1548666 (5GiB) 00:08:41.736 Thin Provisioning: Not Supported 00:08:41.736 Per-NS Atomic Units: No 00:08:41.736 Maximum Single Source Range Length: 128 00:08:41.736 Maximum Copy Length: 128 00:08:41.736 Maximum Source Range Count: 128 00:08:41.736 NGUID/EUI64 Never Reused: No 00:08:41.736 Namespace Write Protected: No 00:08:41.736 Number of LBA Formats: 8 00:08:41.736 Current LBA Format: LBA Format #07 00:08:41.736 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.736 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.736 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.736 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:08:41.736 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.736 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.736 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.736 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.736 00:08:41.736 04:04:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:41.736 04:04:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:08:41.997 ===================================================== 00:08:41.997 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:08:41.997 ===================================================== 00:08:41.997 Controller Capabilities/Features 00:08:41.997 ================================ 00:08:41.997 Vendor ID: 1b36 00:08:41.997 Subsystem Vendor ID: 1af4 00:08:41.997 Serial Number: 12341 00:08:41.997 Model Number: QEMU NVMe Ctrl 00:08:41.997 Firmware Version: 8.0.0 00:08:41.997 Recommended Arb Burst: 6 00:08:41.997 IEEE OUI Identifier: 00 54 52 00:08:41.997 Multi-path I/O 00:08:41.997 May have multiple subsystem ports: No 00:08:41.997 May have multiple controllers: No 00:08:41.997 Associated with SR-IOV VF: No 00:08:41.997 Max Data Transfer Size: 524288 00:08:41.997 Max Number of Namespaces: 256 00:08:41.997 Max Number of I/O Queues: 64 00:08:41.997 NVMe Specification Version (VS): 1.4 00:08:41.997 NVMe Specification Version (Identify): 1.4 00:08:41.997 Maximum Queue Entries: 2048 00:08:41.997 Contiguous Queues Required: Yes 00:08:41.997 Arbitration Mechanisms Supported 00:08:41.997 Weighted Round Robin: Not Supported 00:08:41.997 Vendor Specific: Not Supported 00:08:41.997 Reset Timeout: 7500 ms 00:08:41.997 Doorbell Stride: 4 bytes 00:08:41.997 NVM Subsystem Reset: Not Supported 00:08:41.997 Command Sets Supported 00:08:41.997 NVM Command Set: Supported 00:08:41.997 Boot Partition: Not Supported 00:08:41.997 Memory Page Size Minimum: 4096 bytes 00:08:41.997 Memory Page Size Maximum: 65536 bytes 00:08:41.997 Persistent Memory Region: Not Supported 00:08:41.997 Optional Asynchronous Events Supported 00:08:41.997 Namespace Attribute Notices: Supported 00:08:41.997 Firmware Activation Notices: Not Supported 00:08:41.997 ANA Change Notices: Not Supported 00:08:41.997 PLE Aggregate Log Change Notices: Not Supported 00:08:41.997 LBA Status Info Alert Notices: Not Supported 00:08:41.997 EGE Aggregate Log Change Notices: Not Supported 00:08:41.997 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.997 Zone Descriptor Change Notices: Not Supported 00:08:41.997 Discovery Log Change Notices: Not Supported 00:08:41.997 Controller Attributes 00:08:41.997 128-bit Host Identifier: Not Supported 00:08:41.997 Non-Operational Permissive Mode: Not Supported 00:08:41.997 NVM Sets: Not Supported 00:08:41.997 Read Recovery Levels: Not Supported 00:08:41.997 Endurance Groups: Not Supported 00:08:41.997 Predictable Latency Mode: Not Supported 00:08:41.997 Traffic Based Keep Alive: Not Supported 00:08:41.997 Namespace Granularity: Not Supported 00:08:41.997 SQ Associations: Not Supported 00:08:41.997 UUID List: Not Supported 00:08:41.997 Multi-Domain Subsystem: Not Supported 00:08:41.997 Fixed Capacity Management: Not Supported 00:08:41.997 Variable Capacity Management: Not Supported 00:08:41.997 Delete Endurance Group: Not Supported 00:08:41.997 Delete NVM Set: Not Supported 00:08:41.997 Extended LBA Formats Supported: Supported 00:08:41.997 Flexible Data Placement Supported: Not Supported 00:08:41.997 00:08:41.997 
Controller Memory Buffer Support 00:08:41.997 ================================ 00:08:41.997 Supported: No 00:08:41.997 00:08:41.997 Persistent Memory Region Support 00:08:41.997 ================================ 00:08:41.997 Supported: No 00:08:41.997 00:08:41.997 Admin Command Set Attributes 00:08:41.997 ============================ 00:08:41.997 Security Send/Receive: Not Supported 00:08:41.997 Format NVM: Supported 00:08:41.997 Firmware Activate/Download: Not Supported 00:08:41.997 Namespace Management: Supported 00:08:41.997 Device Self-Test: Not Supported 00:08:41.997 Directives: Supported 00:08:41.997 NVMe-MI: Not Supported 00:08:41.997 Virtualization Management: Not Supported 00:08:41.997 Doorbell Buffer Config: Supported 00:08:41.997 Get LBA Status Capability: Not Supported 00:08:41.997 Command & Feature Lockdown Capability: Not Supported 00:08:41.997 Abort Command Limit: 4 00:08:41.997 Async Event Request Limit: 4 00:08:41.997 Number of Firmware Slots: N/A 00:08:41.997 Firmware Slot 1 Read-Only: N/A 00:08:41.997 Firmware Activation Without Reset: N/A 00:08:41.997 Multiple Update Detection Support: N/A 00:08:41.997 Firmware Update Granularity: No Information Provided 00:08:41.997 Per-Namespace SMART Log: Yes 00:08:41.997 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.997 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:41.997 Command Effects Log Page: Supported 00:08:41.997 Get Log Page Extended Data: Supported 00:08:41.997 Telemetry Log Pages: Not Supported 00:08:41.997 Persistent Event Log Pages: Not Supported 00:08:41.997 Supported Log Pages Log Page: May Support 00:08:41.997 Commands Supported & Effects Log Page: Not Supported 00:08:41.997 Feature Identifiers & Effects Log Page: May Support 00:08:41.997 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.997 Data Area 4 for Telemetry Log: Not Supported 00:08:41.997 Error Log Page Entries Supported: 1 00:08:41.997 Keep Alive: Not Supported 00:08:41.997 00:08:41.997 NVM Command Set Attributes 00:08:41.997 ========================== 00:08:41.997 Submission Queue Entry Size 00:08:41.997 Max: 64 00:08:41.997 Min: 64 00:08:41.997 Completion Queue Entry Size 00:08:41.997 Max: 16 00:08:41.997 Min: 16 00:08:41.997 Number of Namespaces: 256 00:08:41.997 Compare Command: Supported 00:08:41.997 Write Uncorrectable Command: Not Supported 00:08:41.997 Dataset Management Command: Supported 00:08:41.997 Write Zeroes Command: Supported 00:08:41.997 Set Features Save Field: Supported 00:08:41.997 Reservations: Not Supported 00:08:41.997 Timestamp: Supported 00:08:41.998 Copy: Supported 00:08:41.998 Volatile Write Cache: Present 00:08:41.998 Atomic Write Unit (Normal): 1 00:08:41.998 Atomic Write Unit (PFail): 1 00:08:41.998 Atomic Compare & Write Unit: 1 00:08:41.998 Fused Compare & Write: Not Supported 00:08:41.998 Scatter-Gather List 00:08:41.998 SGL Command Set: Supported 00:08:41.998 SGL Keyed: Not Supported 00:08:41.998 SGL Bit Bucket Descriptor: Not Supported 00:08:41.998 SGL Metadata Pointer: Not Supported 00:08:41.998 Oversized SGL: Not Supported 00:08:41.998 SGL Metadata Address: Not Supported 00:08:41.998 SGL Offset: Not Supported 00:08:41.998 Transport SGL Data Block: Not Supported 00:08:41.998 Replay Protected Memory Block: Not Supported 00:08:41.998 00:08:41.998 Firmware Slot Information 00:08:41.998 ========================= 00:08:41.998 Active slot: 1 00:08:41.998 Slot 1 Firmware Revision: 1.0 00:08:41.998 00:08:41.998 00:08:41.998 Commands Supported and Effects 00:08:41.998 ============================== 
00:08:41.998 Admin Commands 00:08:41.998 -------------- 00:08:41.998 Delete I/O Submission Queue (00h): Supported 00:08:41.998 Create I/O Submission Queue (01h): Supported 00:08:41.998 Get Log Page (02h): Supported 00:08:41.998 Delete I/O Completion Queue (04h): Supported 00:08:41.998 Create I/O Completion Queue (05h): Supported 00:08:41.998 Identify (06h): Supported 00:08:41.998 Abort (08h): Supported 00:08:41.998 Set Features (09h): Supported 00:08:41.998 Get Features (0Ah): Supported 00:08:41.998 Asynchronous Event Request (0Ch): Supported 00:08:41.998 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.998 Directive Send (19h): Supported 00:08:41.998 Directive Receive (1Ah): Supported 00:08:41.998 Virtualization Management (1Ch): Supported 00:08:41.998 Doorbell Buffer Config (7Ch): Supported 00:08:41.998 Format NVM (80h): Supported LBA-Change 00:08:41.998 I/O Commands 00:08:41.998 ------------ 00:08:41.998 Flush (00h): Supported LBA-Change 00:08:41.998 Write (01h): Supported LBA-Change 00:08:41.998 Read (02h): Supported 00:08:41.998 Compare (05h): Supported 00:08:41.998 Write Zeroes (08h): Supported LBA-Change 00:08:41.998 Dataset Management (09h): Supported LBA-Change 00:08:41.998 Unknown (0Ch): Supported 00:08:41.998 Unknown (12h): Supported 00:08:41.998 Copy (19h): Supported LBA-Change 00:08:41.998 Unknown (1Dh): Supported LBA-Change 00:08:41.998 00:08:41.998 Error Log 00:08:41.998 ========= 00:08:41.998 00:08:41.998 Arbitration 00:08:41.998 =========== 00:08:41.998 Arbitration Burst: no limit 00:08:41.998 00:08:41.998 Power Management 00:08:41.998 ================ 00:08:41.998 Number of Power States: 1 00:08:41.998 Current Power State: Power State #0 00:08:41.998 Power State #0: 00:08:41.998 Max Power: 25.00 W 00:08:41.998 Non-Operational State: Operational 00:08:41.998 Entry Latency: 16 microseconds 00:08:41.998 Exit Latency: 4 microseconds 00:08:41.998 Relative Read Throughput: 0 00:08:41.998 Relative Read Latency: 0 00:08:41.998 Relative Write Throughput: 0 00:08:41.998 Relative Write Latency: 0 00:08:41.998 Idle Power: Not Reported 00:08:41.998 Active Power: Not Reported 00:08:41.998 Non-Operational Permissive Mode: Not Supported 00:08:41.998 00:08:41.998 Health Information 00:08:41.998 ================== 00:08:41.998 Critical Warnings: 00:08:41.998 Available Spare Space: OK 00:08:41.998 Temperature: OK 00:08:41.998 Device Reliability: OK 00:08:41.998 Read Only: No 00:08:41.998 Volatile Memory Backup: OK 00:08:41.998 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.998 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.998 Available Spare: 0% 00:08:41.998 Available Spare Threshold: 0% 00:08:41.998 Life Percentage Used: 0% 00:08:41.998 Data Units Read: 1114 00:08:41.998 Data Units Written: 511 00:08:41.998 Host Read Commands: 51782 00:08:41.998 Host Write Commands: 25329 00:08:41.998 Controller Busy Time: 0 minutes 00:08:41.998 Power Cycles: 0 00:08:41.998 Power On Hours: 0 hours 00:08:41.998 Unsafe Shutdowns: 0 00:08:41.998 Unrecoverable Media Errors: 0 00:08:41.998 Lifetime Error Log Entries: 0 00:08:41.998 Warning Temperature Time: 0 minutes 00:08:41.998 Critical Temperature Time: 0 minutes 00:08:41.998 00:08:41.998 Number of Queues 00:08:41.998 ================ 00:08:41.998 Number of I/O Submission Queues: 64 00:08:41.998 Number of I/O Completion Queues: 64 00:08:41.998 00:08:41.998 ZNS Specific Controller Data 00:08:41.998 ============================ 00:08:41.998 Zone Append Size Limit: 0 00:08:41.998 00:08:41.998 00:08:41.998 Active Namespaces 
00:08:41.998 ================= 00:08:41.998 Namespace ID:1 00:08:41.998 Error Recovery Timeout: Unlimited 00:08:41.998 Command Set Identifier: NVM (00h) 00:08:41.998 Deallocate: Supported 00:08:41.998 Deallocated/Unwritten Error: Supported 00:08:41.998 Deallocated Read Value: All 0x00 00:08:41.998 Deallocate in Write Zeroes: Not Supported 00:08:41.998 Deallocated Guard Field: 0xFFFF 00:08:41.998 Flush: Supported 00:08:41.998 Reservation: Not Supported 00:08:41.998 Namespace Sharing Capabilities: Private 00:08:41.998 Size (in LBAs): 1310720 (5GiB) 00:08:41.998 Capacity (in LBAs): 1310720 (5GiB) 00:08:41.998 Utilization (in LBAs): 1310720 (5GiB) 00:08:41.998 Thin Provisioning: Not Supported 00:08:41.998 Per-NS Atomic Units: No 00:08:41.998 Maximum Single Source Range Length: 128 00:08:41.998 Maximum Copy Length: 128 00:08:41.998 Maximum Source Range Count: 128 00:08:41.998 NGUID/EUI64 Never Reused: No 00:08:41.998 Namespace Write Protected: No 00:08:41.998 Number of LBA Formats: 8 00:08:41.998 Current LBA Format: LBA Format #04 00:08:41.998 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:41.998 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:41.998 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:41.998 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:41.998 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:41.998 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:41.998 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:41.998 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:41.998 00:08:41.998 04:04:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:41.998 04:04:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:08:41.998 ===================================================== 00:08:41.998 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:08:41.998 ===================================================== 00:08:41.998 Controller Capabilities/Features 00:08:41.998 ================================ 00:08:41.998 Vendor ID: 1b36 00:08:41.998 Subsystem Vendor ID: 1af4 00:08:41.998 Serial Number: 12342 00:08:41.998 Model Number: QEMU NVMe Ctrl 00:08:41.998 Firmware Version: 8.0.0 00:08:41.998 Recommended Arb Burst: 6 00:08:41.998 IEEE OUI Identifier: 00 54 52 00:08:41.998 Multi-path I/O 00:08:41.998 May have multiple subsystem ports: No 00:08:41.998 May have multiple controllers: No 00:08:41.998 Associated with SR-IOV VF: No 00:08:41.998 Max Data Transfer Size: 524288 00:08:41.998 Max Number of Namespaces: 256 00:08:41.998 Max Number of I/O Queues: 64 00:08:41.998 NVMe Specification Version (VS): 1.4 00:08:41.998 NVMe Specification Version (Identify): 1.4 00:08:41.998 Maximum Queue Entries: 2048 00:08:41.998 Contiguous Queues Required: Yes 00:08:41.998 Arbitration Mechanisms Supported 00:08:41.998 Weighted Round Robin: Not Supported 00:08:41.998 Vendor Specific: Not Supported 00:08:41.998 Reset Timeout: 7500 ms 00:08:41.998 Doorbell Stride: 4 bytes 00:08:41.998 NVM Subsystem Reset: Not Supported 00:08:41.998 Command Sets Supported 00:08:41.998 NVM Command Set: Supported 00:08:41.998 Boot Partition: Not Supported 00:08:41.998 Memory Page Size Minimum: 4096 bytes 00:08:41.998 Memory Page Size Maximum: 65536 bytes 00:08:41.998 Persistent Memory Region: Not Supported 00:08:41.998 Optional Asynchronous Events Supported 00:08:41.998 Namespace Attribute Notices: Supported 00:08:41.998 Firmware Activation Notices: Not Supported 00:08:41.998 ANA Change Notices: Not Supported 
00:08:41.998 PLE Aggregate Log Change Notices: Not Supported 00:08:41.998 LBA Status Info Alert Notices: Not Supported 00:08:41.998 EGE Aggregate Log Change Notices: Not Supported 00:08:41.998 Normal NVM Subsystem Shutdown event: Not Supported 00:08:41.998 Zone Descriptor Change Notices: Not Supported 00:08:41.998 Discovery Log Change Notices: Not Supported 00:08:41.998 Controller Attributes 00:08:41.998 128-bit Host Identifier: Not Supported 00:08:41.998 Non-Operational Permissive Mode: Not Supported 00:08:41.998 NVM Sets: Not Supported 00:08:41.998 Read Recovery Levels: Not Supported 00:08:41.999 Endurance Groups: Not Supported 00:08:41.999 Predictable Latency Mode: Not Supported 00:08:41.999 Traffic Based Keep Alive: Not Supported 00:08:41.999 Namespace Granularity: Not Supported 00:08:41.999 SQ Associations: Not Supported 00:08:41.999 UUID List: Not Supported 00:08:41.999 Multi-Domain Subsystem: Not Supported 00:08:41.999 Fixed Capacity Management: Not Supported 00:08:41.999 Variable Capacity Management: Not Supported 00:08:41.999 Delete Endurance Group: Not Supported 00:08:41.999 Delete NVM Set: Not Supported 00:08:41.999 Extended LBA Formats Supported: Supported 00:08:41.999 Flexible Data Placement Supported: Not Supported 00:08:41.999 00:08:41.999 Controller Memory Buffer Support 00:08:41.999 ================================ 00:08:41.999 Supported: No 00:08:41.999 00:08:41.999 Persistent Memory Region Support 00:08:41.999 ================================ 00:08:41.999 Supported: No 00:08:41.999 00:08:41.999 Admin Command Set Attributes 00:08:41.999 ============================ 00:08:41.999 Security Send/Receive: Not Supported 00:08:41.999 Format NVM: Supported 00:08:41.999 Firmware Activate/Download: Not Supported 00:08:41.999 Namespace Management: Supported 00:08:41.999 Device Self-Test: Not Supported 00:08:41.999 Directives: Supported 00:08:41.999 NVMe-MI: Not Supported 00:08:41.999 Virtualization Management: Not Supported 00:08:41.999 Doorbell Buffer Config: Supported 00:08:41.999 Get LBA Status Capability: Not Supported 00:08:41.999 Command & Feature Lockdown Capability: Not Supported 00:08:41.999 Abort Command Limit: 4 00:08:41.999 Async Event Request Limit: 4 00:08:41.999 Number of Firmware Slots: N/A 00:08:41.999 Firmware Slot 1 Read-Only: N/A 00:08:41.999 Firmware Activation Without Reset: N/A 00:08:41.999 Multiple Update Detection Support: N/A 00:08:41.999 Firmware Update Granularity: No Information Provided 00:08:41.999 Per-Namespace SMART Log: Yes 00:08:41.999 Asymmetric Namespace Access Log Page: Not Supported 00:08:41.999 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:41.999 Command Effects Log Page: Supported 00:08:41.999 Get Log Page Extended Data: Supported 00:08:41.999 Telemetry Log Pages: Not Supported 00:08:41.999 Persistent Event Log Pages: Not Supported 00:08:41.999 Supported Log Pages Log Page: May Support 00:08:41.999 Commands Supported & Effects Log Page: Not Supported 00:08:41.999 Feature Identifiers & Effects Log Page: May Support 00:08:41.999 NVMe-MI Commands & Effects Log Page: May Support 00:08:41.999 Data Area 4 for Telemetry Log: Not Supported 00:08:41.999 Error Log Page Entries Supported: 1 00:08:41.999 Keep Alive: Not Supported 00:08:41.999 00:08:41.999 NVM Command Set Attributes 00:08:41.999 ========================== 00:08:41.999 Submission Queue Entry Size 00:08:41.999 Max: 64 00:08:41.999 Min: 64 00:08:41.999 Completion Queue Entry Size 00:08:41.999 Max: 16 00:08:41.999 Min: 16 00:08:41.999 Number of Namespaces: 256 00:08:41.999 Compare Command: 
Supported 00:08:41.999 Write Uncorrectable Command: Not Supported 00:08:41.999 Dataset Management Command: Supported 00:08:41.999 Write Zeroes Command: Supported 00:08:41.999 Set Features Save Field: Supported 00:08:41.999 Reservations: Not Supported 00:08:41.999 Timestamp: Supported 00:08:41.999 Copy: Supported 00:08:41.999 Volatile Write Cache: Present 00:08:41.999 Atomic Write Unit (Normal): 1 00:08:41.999 Atomic Write Unit (PFail): 1 00:08:41.999 Atomic Compare & Write Unit: 1 00:08:41.999 Fused Compare & Write: Not Supported 00:08:41.999 Scatter-Gather List 00:08:41.999 SGL Command Set: Supported 00:08:41.999 SGL Keyed: Not Supported 00:08:41.999 SGL Bit Bucket Descriptor: Not Supported 00:08:41.999 SGL Metadata Pointer: Not Supported 00:08:41.999 Oversized SGL: Not Supported 00:08:41.999 SGL Metadata Address: Not Supported 00:08:41.999 SGL Offset: Not Supported 00:08:41.999 Transport SGL Data Block: Not Supported 00:08:41.999 Replay Protected Memory Block: Not Supported 00:08:41.999 00:08:41.999 Firmware Slot Information 00:08:41.999 ========================= 00:08:41.999 Active slot: 1 00:08:41.999 Slot 1 Firmware Revision: 1.0 00:08:41.999 00:08:41.999 00:08:41.999 Commands Supported and Effects 00:08:41.999 ============================== 00:08:41.999 Admin Commands 00:08:41.999 -------------- 00:08:41.999 Delete I/O Submission Queue (00h): Supported 00:08:41.999 Create I/O Submission Queue (01h): Supported 00:08:41.999 Get Log Page (02h): Supported 00:08:41.999 Delete I/O Completion Queue (04h): Supported 00:08:41.999 Create I/O Completion Queue (05h): Supported 00:08:41.999 Identify (06h): Supported 00:08:41.999 Abort (08h): Supported 00:08:41.999 Set Features (09h): Supported 00:08:41.999 Get Features (0Ah): Supported 00:08:41.999 Asynchronous Event Request (0Ch): Supported 00:08:41.999 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:41.999 Directive Send (19h): Supported 00:08:41.999 Directive Receive (1Ah): Supported 00:08:41.999 Virtualization Management (1Ch): Supported 00:08:41.999 Doorbell Buffer Config (7Ch): Supported 00:08:41.999 Format NVM (80h): Supported LBA-Change 00:08:41.999 I/O Commands 00:08:41.999 ------------ 00:08:41.999 Flush (00h): Supported LBA-Change 00:08:41.999 Write (01h): Supported LBA-Change 00:08:41.999 Read (02h): Supported 00:08:41.999 Compare (05h): Supported 00:08:41.999 Write Zeroes (08h): Supported LBA-Change 00:08:41.999 Dataset Management (09h): Supported LBA-Change 00:08:41.999 Unknown (0Ch): Supported 00:08:41.999 Unknown (12h): Supported 00:08:41.999 Copy (19h): Supported LBA-Change 00:08:41.999 Unknown (1Dh): Supported LBA-Change 00:08:41.999 00:08:41.999 Error Log 00:08:41.999 ========= 00:08:41.999 00:08:41.999 Arbitration 00:08:41.999 =========== 00:08:41.999 Arbitration Burst: no limit 00:08:41.999 00:08:41.999 Power Management 00:08:41.999 ================ 00:08:41.999 Number of Power States: 1 00:08:41.999 Current Power State: Power State #0 00:08:41.999 Power State #0: 00:08:41.999 Max Power: 25.00 W 00:08:41.999 Non-Operational State: Operational 00:08:41.999 Entry Latency: 16 microseconds 00:08:41.999 Exit Latency: 4 microseconds 00:08:41.999 Relative Read Throughput: 0 00:08:41.999 Relative Read Latency: 0 00:08:41.999 Relative Write Throughput: 0 00:08:41.999 Relative Write Latency: 0 00:08:41.999 Idle Power: Not Reported 00:08:41.999 Active Power: Not Reported 00:08:41.999 Non-Operational Permissive Mode: Not Supported 00:08:41.999 00:08:41.999 Health Information 00:08:41.999 ================== 00:08:41.999 
Critical Warnings: 00:08:41.999 Available Spare Space: OK 00:08:41.999 Temperature: OK 00:08:41.999 Device Reliability: OK 00:08:41.999 Read Only: No 00:08:41.999 Volatile Memory Backup: OK 00:08:41.999 Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.999 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:41.999 Available Spare: 0% 00:08:41.999 Available Spare Threshold: 0% 00:08:41.999 Life Percentage Used: 0% 00:08:41.999 Data Units Read: 3428 00:08:41.999 Data Units Written: 1571 00:08:41.999 Host Read Commands: 156829 00:08:41.999 Host Write Commands: 76578 00:08:41.999 Controller Busy Time: 0 minutes 00:08:41.999 Power Cycles: 0 00:08:41.999 Power On Hours: 0 hours 00:08:41.999 Unsafe Shutdowns: 0 00:08:41.999 Unrecoverable Media Errors: 0 00:08:41.999 Lifetime Error Log Entries: 0 00:08:41.999 Warning Temperature Time: 0 minutes 00:08:41.999 Critical Temperature Time: 0 minutes 00:08:41.999 00:08:41.999 Number of Queues 00:08:41.999 ================ 00:08:41.999 Number of I/O Submission Queues: 64 00:08:41.999 Number of I/O Completion Queues: 64 00:08:41.999 00:08:41.999 ZNS Specific Controller Data 00:08:41.999 ============================ 00:08:41.999 Zone Append Size Limit: 0 00:08:41.999 00:08:41.999 00:08:41.999 Active Namespaces 00:08:41.999 ================= 00:08:41.999 Namespace ID:1 00:08:41.999 Error Recovery Timeout: Unlimited 00:08:41.999 Command Set Identifier: NVM (00h) 00:08:41.999 Deallocate: Supported 00:08:41.999 Deallocated/Unwritten Error: Supported 00:08:41.999 Deallocated Read Value: All 0x00 00:08:41.999 Deallocate in Write Zeroes: Not Supported 00:08:41.999 Deallocated Guard Field: 0xFFFF 00:08:41.999 Flush: Supported 00:08:41.999 Reservation: Not Supported 00:08:41.999 Namespace Sharing Capabilities: Private 00:08:41.999 Size (in LBAs): 1048576 (4GiB) 00:08:41.999 Capacity (in LBAs): 1048576 (4GiB) 00:08:41.999 Utilization (in LBAs): 1048576 (4GiB) 00:08:41.999 Thin Provisioning: Not Supported 00:08:41.999 Per-NS Atomic Units: No 00:08:41.999 Maximum Single Source Range Length: 128 00:08:41.999 Maximum Copy Length: 128 00:08:41.999 Maximum Source Range Count: 128 00:08:42.000 NGUID/EUI64 Never Reused: No 00:08:42.000 Namespace Write Protected: No 00:08:42.000 Number of LBA Formats: 8 00:08:42.000 Current LBA Format: LBA Format #04 00:08:42.000 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:42.000 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:42.000 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:42.000 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:42.000 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:42.000 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:42.000 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:42.000 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:42.000 00:08:42.000 Namespace ID:2 00:08:42.000 Error Recovery Timeout: Unlimited 00:08:42.000 Command Set Identifier: NVM (00h) 00:08:42.000 Deallocate: Supported 00:08:42.000 Deallocated/Unwritten Error: Supported 00:08:42.000 Deallocated Read Value: All 0x00 00:08:42.000 Deallocate in Write Zeroes: Not Supported 00:08:42.000 Deallocated Guard Field: 0xFFFF 00:08:42.000 Flush: Supported 00:08:42.000 Reservation: Not Supported 00:08:42.000 Namespace Sharing Capabilities: Private 00:08:42.000 Size (in LBAs): 1048576 (4GiB) 00:08:42.000 Capacity (in LBAs): 1048576 (4GiB) 00:08:42.000 Utilization (in LBAs): 1048576 (4GiB) 00:08:42.000 Thin Provisioning: Not Supported 00:08:42.000 Per-NS Atomic Units: No 00:08:42.000 Maximum Single 
Source Range Length: 128 00:08:42.000 Maximum Copy Length: 128 00:08:42.000 Maximum Source Range Count: 128 00:08:42.000 NGUID/EUI64 Never Reused: No 00:08:42.000 Namespace Write Protected: No 00:08:42.000 Number of LBA Formats: 8 00:08:42.000 Current LBA Format: LBA Format #04 00:08:42.000 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:42.000 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:42.000 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:42.000 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:42.000 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:42.000 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:42.000 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:42.000 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:42.000 00:08:42.000 Namespace ID:3 00:08:42.000 Error Recovery Timeout: Unlimited 00:08:42.000 Command Set Identifier: NVM (00h) 00:08:42.000 Deallocate: Supported 00:08:42.000 Deallocated/Unwritten Error: Supported 00:08:42.000 Deallocated Read Value: All 0x00 00:08:42.000 Deallocate in Write Zeroes: Not Supported 00:08:42.000 Deallocated Guard Field: 0xFFFF 00:08:42.000 Flush: Supported 00:08:42.000 Reservation: Not Supported 00:08:42.000 Namespace Sharing Capabilities: Private 00:08:42.000 Size (in LBAs): 1048576 (4GiB) 00:08:42.000 Capacity (in LBAs): 1048576 (4GiB) 00:08:42.000 Utilization (in LBAs): 1048576 (4GiB) 00:08:42.000 Thin Provisioning: Not Supported 00:08:42.000 Per-NS Atomic Units: No 00:08:42.000 Maximum Single Source Range Length: 128 00:08:42.000 Maximum Copy Length: 128 00:08:42.000 Maximum Source Range Count: 128 00:08:42.000 NGUID/EUI64 Never Reused: No 00:08:42.000 Namespace Write Protected: No 00:08:42.000 Number of LBA Formats: 8 00:08:42.000 Current LBA Format: LBA Format #04 00:08:42.000 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:42.000 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:42.000 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:42.000 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:42.000 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:42.000 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:42.000 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:42.000 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:42.000 00:08:42.262 04:04:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:42.262 04:04:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:08:42.262 ===================================================== 00:08:42.262 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:08:42.262 ===================================================== 00:08:42.262 Controller Capabilities/Features 00:08:42.263 ================================ 00:08:42.263 Vendor ID: 1b36 00:08:42.263 Subsystem Vendor ID: 1af4 00:08:42.263 Serial Number: 12343 00:08:42.263 Model Number: QEMU NVMe Ctrl 00:08:42.263 Firmware Version: 8.0.0 00:08:42.263 Recommended Arb Burst: 6 00:08:42.263 IEEE OUI Identifier: 00 54 52 00:08:42.263 Multi-path I/O 00:08:42.263 May have multiple subsystem ports: No 00:08:42.263 May have multiple controllers: Yes 00:08:42.263 Associated with SR-IOV VF: No 00:08:42.263 Max Data Transfer Size: 524288 00:08:42.263 Max Number of Namespaces: 256 00:08:42.263 Max Number of I/O Queues: 64 00:08:42.263 NVMe Specification Version (VS): 1.4 00:08:42.263 NVMe Specification Version (Identify): 1.4 00:08:42.263 Maximum Queue Entries: 2048 00:08:42.263 Contiguous Queues 
Required: Yes 00:08:42.263 Arbitration Mechanisms Supported 00:08:42.263 Weighted Round Robin: Not Supported 00:08:42.263 Vendor Specific: Not Supported 00:08:42.263 Reset Timeout: 7500 ms 00:08:42.263 Doorbell Stride: 4 bytes 00:08:42.263 NVM Subsystem Reset: Not Supported 00:08:42.263 Command Sets Supported 00:08:42.263 NVM Command Set: Supported 00:08:42.263 Boot Partition: Not Supported 00:08:42.263 Memory Page Size Minimum: 4096 bytes 00:08:42.263 Memory Page Size Maximum: 65536 bytes 00:08:42.263 Persistent Memory Region: Not Supported 00:08:42.263 Optional Asynchronous Events Supported 00:08:42.263 Namespace Attribute Notices: Supported 00:08:42.263 Firmware Activation Notices: Not Supported 00:08:42.263 ANA Change Notices: Not Supported 00:08:42.263 PLE Aggregate Log Change Notices: Not Supported 00:08:42.263 LBA Status Info Alert Notices: Not Supported 00:08:42.263 EGE Aggregate Log Change Notices: Not Supported 00:08:42.263 Normal NVM Subsystem Shutdown event: Not Supported 00:08:42.263 Zone Descriptor Change Notices: Not Supported 00:08:42.263 Discovery Log Change Notices: Not Supported 00:08:42.263 Controller Attributes 00:08:42.263 128-bit Host Identifier: Not Supported 00:08:42.263 Non-Operational Permissive Mode: Not Supported 00:08:42.263 NVM Sets: Not Supported 00:08:42.263 Read Recovery Levels: Not Supported 00:08:42.263 Endurance Groups: Supported 00:08:42.263 Predictable Latency Mode: Not Supported 00:08:42.263 Traffic Based Keep Alive: Not Supported 00:08:42.263 Namespace Granularity: Not Supported 00:08:42.263 SQ Associations: Not Supported 00:08:42.263 UUID List: Not Supported 00:08:42.263 Multi-Domain Subsystem: Not Supported 00:08:42.263 Fixed Capacity Management: Not Supported 00:08:42.263 Variable Capacity Management: Not Supported 00:08:42.263 Delete Endurance Group: Not Supported 00:08:42.263 Delete NVM Set: Not Supported 00:08:42.263 Extended LBA Formats Supported: Supported 00:08:42.263 Flexible Data Placement Supported: Supported 00:08:42.263 00:08:42.263 Controller Memory Buffer Support 00:08:42.263 ================================ 00:08:42.263 Supported: No 00:08:42.263 00:08:42.263 Persistent Memory Region Support 00:08:42.263 ================================ 00:08:42.263 Supported: No 00:08:42.263 00:08:42.263 Admin Command Set Attributes 00:08:42.263 ============================ 00:08:42.263 Security Send/Receive: Not Supported 00:08:42.263 Format NVM: Supported 00:08:42.263 Firmware Activate/Download: Not Supported 00:08:42.263 Namespace Management: Supported 00:08:42.263 Device Self-Test: Not Supported 00:08:42.263 Directives: Supported 00:08:42.263 NVMe-MI: Not Supported 00:08:42.263 Virtualization Management: Not Supported 00:08:42.263 Doorbell Buffer Config: Supported 00:08:42.263 Get LBA Status Capability: Not Supported 00:08:42.263 Command & Feature Lockdown Capability: Not Supported 00:08:42.263 Abort Command Limit: 4 00:08:42.263 Async Event Request Limit: 4 00:08:42.263 Number of Firmware Slots: N/A 00:08:42.263 Firmware Slot 1 Read-Only: N/A 00:08:42.263 Firmware Activation Without Reset: N/A 00:08:42.263 Multiple Update Detection Support: N/A 00:08:42.263 Firmware Update Granularity: No Information Provided 00:08:42.263 Per-Namespace SMART Log: Yes 00:08:42.263 Asymmetric Namespace Access Log Page: Not Supported 00:08:42.263 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:42.263 Command Effects Log Page: Supported 00:08:42.263 Get Log Page Extended Data: Supported 00:08:42.263 Telemetry Log Pages: Not Supported 00:08:42.263 Persistent 
Event Log Pages: Not Supported 00:08:42.263 Supported Log Pages Log Page: May Support 00:08:42.263 Commands Supported & Effects Log Page: Not Supported 00:08:42.263 Feature Identifiers & Effects Log Page: May Support 00:08:42.263 NVMe-MI Commands & Effects Log Page: May Support 00:08:42.263 Data Area 4 for Telemetry Log: Not Supported 00:08:42.263 Error Log Page Entries Supported: 1 00:08:42.263 Keep Alive: Not Supported 00:08:42.263 00:08:42.263 NVM Command Set Attributes 00:08:42.263 ========================== 00:08:42.263 Submission Queue Entry Size 00:08:42.263 Max: 64 00:08:42.263 Min: 64 00:08:42.263 Completion Queue Entry Size 00:08:42.263 Max: 16 00:08:42.263 Min: 16 00:08:42.263 Number of Namespaces: 256 00:08:42.263 Compare Command: Supported 00:08:42.263 Write Uncorrectable Command: Not Supported 00:08:42.263 Dataset Management Command: Supported 00:08:42.263 Write Zeroes Command: Supported 00:08:42.263 Set Features Save Field: Supported 00:08:42.263 Reservations: Not Supported 00:08:42.263 Timestamp: Supported 00:08:42.263 Copy: Supported 00:08:42.263 Volatile Write Cache: Present 00:08:42.263 Atomic Write Unit (Normal): 1 00:08:42.263 Atomic Write Unit (PFail): 1 00:08:42.263 Atomic Compare & Write Unit: 1 00:08:42.263 Fused Compare & Write: Not Supported 00:08:42.263 Scatter-Gather List 00:08:42.263 SGL Command Set: Supported 00:08:42.263 SGL Keyed: Not Supported 00:08:42.263 SGL Bit Bucket Descriptor: Not Supported 00:08:42.263 SGL Metadata Pointer: Not Supported 00:08:42.263 Oversized SGL: Not Supported 00:08:42.263 SGL Metadata Address: Not Supported 00:08:42.263 SGL Offset: Not Supported 00:08:42.263 Transport SGL Data Block: Not Supported 00:08:42.263 Replay Protected Memory Block: Not Supported 00:08:42.263 00:08:42.263 Firmware Slot Information 00:08:42.263 ========================= 00:08:42.263 Active slot: 1 00:08:42.263 Slot 1 Firmware Revision: 1.0 00:08:42.263 00:08:42.263 00:08:42.263 Commands Supported and Effects 00:08:42.263 ============================== 00:08:42.263 Admin Commands 00:08:42.263 -------------- 00:08:42.263 Delete I/O Submission Queue (00h): Supported 00:08:42.263 Create I/O Submission Queue (01h): Supported 00:08:42.263 Get Log Page (02h): Supported 00:08:42.263 Delete I/O Completion Queue (04h): Supported 00:08:42.263 Create I/O Completion Queue (05h): Supported 00:08:42.263 Identify (06h): Supported 00:08:42.263 Abort (08h): Supported 00:08:42.263 Set Features (09h): Supported 00:08:42.263 Get Features (0Ah): Supported 00:08:42.263 Asynchronous Event Request (0Ch): Supported 00:08:42.263 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:42.263 Directive Send (19h): Supported 00:08:42.263 Directive Receive (1Ah): Supported 00:08:42.263 Virtualization Management (1Ch): Supported 00:08:42.263 Doorbell Buffer Config (7Ch): Supported 00:08:42.263 Format NVM (80h): Supported LBA-Change 00:08:42.263 I/O Commands 00:08:42.263 ------------ 00:08:42.263 Flush (00h): Supported LBA-Change 00:08:42.263 Write (01h): Supported LBA-Change 00:08:42.263 Read (02h): Supported 00:08:42.263 Compare (05h): Supported 00:08:42.263 Write Zeroes (08h): Supported LBA-Change 00:08:42.263 Dataset Management (09h): Supported LBA-Change 00:08:42.263 Unknown (0Ch): Supported 00:08:42.263 Unknown (12h): Supported 00:08:42.263 Copy (19h): Supported LBA-Change 00:08:42.263 Unknown (1Dh): Supported LBA-Change 00:08:42.263 00:08:42.263 Error Log 00:08:42.263 ========= 00:08:42.263 00:08:42.263 Arbitration 00:08:42.263 =========== 00:08:42.263 Arbitration Burst: no 
limit 00:08:42.263 00:08:42.263 Power Management 00:08:42.263 ================ 00:08:42.263 Number of Power States: 1 00:08:42.263 Current Power State: Power State #0 00:08:42.263 Power State #0: 00:08:42.263 Max Power: 25.00 W 00:08:42.263 Non-Operational State: Operational 00:08:42.263 Entry Latency: 16 microseconds 00:08:42.263 Exit Latency: 4 microseconds 00:08:42.263 Relative Read Throughput: 0 00:08:42.264 Relative Read Latency: 0 00:08:42.264 Relative Write Throughput: 0 00:08:42.264 Relative Write Latency: 0 00:08:42.264 Idle Power: Not Reported 00:08:42.264 Active Power: Not Reported 00:08:42.264 Non-Operational Permissive Mode: Not Supported 00:08:42.264 00:08:42.264 Health Information 00:08:42.264 ================== 00:08:42.264 Critical Warnings: 00:08:42.264 Available Spare Space: OK 00:08:42.264 Temperature: OK 00:08:42.264 Device Reliability: OK 00:08:42.264 Read Only: No 00:08:42.264 Volatile Memory Backup: OK 00:08:42.264 Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.264 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:42.264 Available Spare: 0% 00:08:42.264 Available Spare Threshold: 0% 00:08:42.264 Life Percentage Used: 0% 00:08:42.264 Data Units Read: 1190 00:08:42.264 Data Units Written: 548 00:08:42.264 Host Read Commands: 52646 00:08:42.264 Host Write Commands: 25754 00:08:42.264 Controller Busy Time: 0 minutes 00:08:42.264 Power Cycles: 0 00:08:42.264 Power On Hours: 0 hours 00:08:42.264 Unsafe Shutdowns: 0 00:08:42.264 Unrecoverable Media Errors: 0 00:08:42.264 Lifetime Error Log Entries: 0 00:08:42.264 Warning Temperature Time: 0 minutes 00:08:42.264 Critical Temperature Time: 0 minutes 00:08:42.264 00:08:42.264 Number of Queues 00:08:42.264 ================ 00:08:42.264 Number of I/O Submission Queues: 64 00:08:42.264 Number of I/O Completion Queues: 64 00:08:42.264 00:08:42.264 ZNS Specific Controller Data 00:08:42.264 ============================ 00:08:42.264 Zone Append Size Limit: 0 00:08:42.264 00:08:42.264 00:08:42.264 Active Namespaces 00:08:42.264 ================= 00:08:42.264 Namespace ID:1 00:08:42.264 Error Recovery Timeout: Unlimited 00:08:42.264 Command Set Identifier: NVM (00h) 00:08:42.264 Deallocate: Supported 00:08:42.264 Deallocated/Unwritten Error: Supported 00:08:42.264 Deallocated Read Value: All 0x00 00:08:42.264 Deallocate in Write Zeroes: Not Supported 00:08:42.264 Deallocated Guard Field: 0xFFFF 00:08:42.264 Flush: Supported 00:08:42.264 Reservation: Not Supported 00:08:42.264 Namespace Sharing Capabilities: Multiple Controllers 00:08:42.264 Size (in LBAs): 262144 (1GiB) 00:08:42.264 Capacity (in LBAs): 262144 (1GiB) 00:08:42.264 Utilization (in LBAs): 262144 (1GiB) 00:08:42.264 Thin Provisioning: Not Supported 00:08:42.264 Per-NS Atomic Units: No 00:08:42.264 Maximum Single Source Range Length: 128 00:08:42.264 Maximum Copy Length: 128 00:08:42.264 Maximum Source Range Count: 128 00:08:42.264 NGUID/EUI64 Never Reused: No 00:08:42.264 Namespace Write Protected: No 00:08:42.264 Endurance group ID: 1 00:08:42.264 Number of LBA Formats: 8 00:08:42.264 Current LBA Format: LBA Format #04 00:08:42.264 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:42.264 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:42.264 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:42.264 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:42.264 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:42.264 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:42.264 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:42.264 LBA 
Format #07: Data Size: 4096 Metadata Size: 64 00:08:42.264 00:08:42.264 Get Feature FDP: 00:08:42.264 ================ 00:08:42.264 Enabled: Yes 00:08:42.264 FDP configuration index: 0 00:08:42.264 00:08:42.264 FDP configurations log page 00:08:42.264 =========================== 00:08:42.264 Number of FDP configurations: 1 00:08:42.264 Version: 0 00:08:42.264 Size: 112 00:08:42.264 FDP Configuration Descriptor: 0 00:08:42.264 Descriptor Size: 96 00:08:42.264 Reclaim Group Identifier format: 2 00:08:42.264 FDP Volatile Write Cache: Not Present 00:08:42.264 FDP Configuration: Valid 00:08:42.264 Vendor Specific Size: 0 00:08:42.264 Number of Reclaim Groups: 2 00:08:42.264 Number of Reclaim Unit Handles: 8 00:08:42.264 Max Placement Identifiers: 128 00:08:42.264 Number of Namespaces Supported: 256 00:08:42.264 Reclaim unit Nominal Size: 6000000 bytes 00:08:42.264 Estimated Reclaim Unit Time Limit: Not Reported 00:08:42.264 RUH Desc #000: RUH Type: Initially Isolated 00:08:42.264 RUH Desc #001: RUH Type: Initially Isolated 00:08:42.264 RUH Desc #002: RUH Type: Initially Isolated 00:08:42.264 RUH Desc #003: RUH Type: Initially Isolated 00:08:42.264 RUH Desc #004: RUH Type: Initially Isolated 00:08:42.264 RUH Desc #005: RUH Type: Initially Isolated 00:08:42.264 RUH Desc #006: RUH Type: Initially Isolated 00:08:42.264 RUH Desc #007: RUH Type: Initially Isolated 00:08:42.264 00:08:42.264 FDP reclaim unit handle usage log page 00:08:42.264 ====================================== 00:08:42.264 Number of Reclaim Unit Handles: 8 00:08:42.264 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:42.264 RUH Usage Desc #001: RUH Attributes: Unused 00:08:42.264 RUH Usage Desc #002: RUH Attributes: Unused 00:08:42.264 RUH Usage Desc #003: RUH Attributes: Unused 00:08:42.264 RUH Usage Desc #004: RUH Attributes: Unused 00:08:42.264 RUH Usage Desc #005: RUH Attributes: Unused 00:08:42.264 RUH Usage Desc #006: RUH Attributes: Unused 00:08:42.264 RUH Usage Desc #007: RUH Attributes: Unused 00:08:42.264 00:08:42.264 FDP statistics log page 00:08:42.264 ======================= 00:08:42.264 Host bytes with metadata written: 371154944 00:08:42.264 Media bytes with metadata written: 371302400 00:08:42.264 Media bytes erased: 0 00:08:42.264 00:08:42.264 FDP events log page 00:08:42.264 =================== 00:08:42.264 Number of FDP events: 0 00:08:42.264 00:08:42.264 00:08:42.264 real 0m1.022s 00:08:42.264 user 0m0.318s 00:08:42.264 sys 0m0.505s 00:08:42.264 ************************************ 00:08:42.264 END TEST nvme_identify 00:08:42.264 ************************************ 00:08:42.264 04:04:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.264 04:04:43 -- common/autotest_common.sh@10 -- # set +x 00:08:42.264 04:04:44 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:42.264 04:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.264 04:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.264 04:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:42.523 ************************************ 00:08:42.523 START TEST nvme_perf 00:08:42.523 ************************************ 00:08:42.523 04:04:44 -- common/autotest_common.sh@1114 -- # nvme_perf 00:08:42.523 04:04:44 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:43.470 Initializing NVMe Controllers 00:08:43.470 Attached to NVMe Controller at 
0000:00:07.0 [1b36:0010] 00:08:43.470 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:08:43.470 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:08:43.470 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:08:43.470 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:08:43.470 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:08:43.470 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:08:43.470 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:08:43.470 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:08:43.470 Initialization complete. Launching workers. 00:08:43.470 ======================================================== 00:08:43.470 Latency(us) 00:08:43.470 Device Information : IOPS MiB/s Average min max 00:08:43.470 PCIE (0000:00:06.0) NSID 1 from core 0: 12215.02 143.14 10476.76 4939.72 21061.10 00:08:43.470 PCIE (0000:00:07.0) NSID 1 from core 0: 12215.02 143.14 10482.71 4950.69 22230.06 00:08:43.470 PCIE (0000:00:09.0) NSID 1 from core 0: 12215.02 143.14 10479.56 5009.34 24163.33 00:08:43.470 PCIE (0000:00:08.0) NSID 1 from core 0: 12215.02 143.14 10475.68 5092.32 25722.65 00:08:43.470 PCIE (0000:00:08.0) NSID 2 from core 0: 12215.02 143.14 10472.46 5121.04 26233.54 00:08:43.470 PCIE (0000:00:08.0) NSID 3 from core 0: 12215.02 143.14 10469.60 4360.85 27569.54 00:08:43.470 ======================================================== 00:08:43.470 Total : 73290.09 858.87 10476.13 4360.85 27569.54 00:08:43.470 00:08:43.470 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:08:43.470 ================================================================================= 00:08:43.470 1.00000% : 5192.468us 00:08:43.470 10.00000% : 5999.065us 00:08:43.470 25.00000% : 8973.391us 00:08:43.470 50.00000% : 10889.058us 00:08:43.470 75.00000% : 12199.778us 00:08:43.470 90.00000% : 14014.622us 00:08:43.470 95.00000% : 15224.517us 00:08:43.470 98.00000% : 16232.763us 00:08:43.470 99.00000% : 17946.782us 00:08:43.470 99.50000% : 19559.975us 00:08:43.470 99.90000% : 20769.871us 00:08:43.470 99.99000% : 21072.345us 00:08:43.470 99.99900% : 21072.345us 00:08:43.470 99.99990% : 21072.345us 00:08:43.470 99.99999% : 21072.345us 00:08:43.470 00:08:43.470 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:08:43.470 ================================================================================= 00:08:43.470 1.00000% : 5268.086us 00:08:43.470 10.00000% : 5973.858us 00:08:43.470 25.00000% : 8519.680us 00:08:43.470 50.00000% : 10838.646us 00:08:43.470 75.00000% : 12199.778us 00:08:43.470 90.00000% : 14317.095us 00:08:43.470 95.00000% : 15325.342us 00:08:43.470 98.00000% : 16131.938us 00:08:43.470 99.00000% : 19559.975us 00:08:43.470 99.50000% : 20870.695us 00:08:43.470 99.90000% : 21979.766us 00:08:43.470 99.99000% : 22282.240us 00:08:43.470 99.99900% : 22282.240us 00:08:43.470 99.99990% : 22282.240us 00:08:43.470 99.99999% : 22282.240us 00:08:43.470 00:08:43.470 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:08:43.470 ================================================================================= 00:08:43.470 1.00000% : 5293.292us 00:08:43.470 10.00000% : 6024.271us 00:08:43.470 25.00000% : 8318.031us 00:08:43.470 50.00000% : 10939.471us 00:08:43.470 75.00000% : 12250.191us 00:08:43.470 90.00000% : 13913.797us 00:08:43.470 95.00000% : 14922.043us 00:08:43.470 98.00000% : 15930.289us 00:08:43.470 99.00000% : 21475.643us 00:08:43.470 99.50000% : 22887.188us 00:08:43.470 99.90000% : 23996.258us 00:08:43.470 99.99000% 
: 24197.908us 00:08:43.470 99.99900% : 24197.908us 00:08:43.470 99.99990% : 24197.908us 00:08:43.470 99.99999% : 24197.908us 00:08:43.470 00:08:43.470 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:08:43.470 ================================================================================= 00:08:43.470 1.00000% : 5318.498us 00:08:43.470 10.00000% : 5999.065us 00:08:43.470 25.00000% : 8519.680us 00:08:43.470 50.00000% : 11040.295us 00:08:43.470 75.00000% : 12250.191us 00:08:43.470 90.00000% : 13510.498us 00:08:43.470 95.00000% : 14417.920us 00:08:43.470 98.00000% : 15526.991us 00:08:43.470 99.00000% : 22988.012us 00:08:43.470 99.50000% : 24399.557us 00:08:43.470 99.90000% : 25508.628us 00:08:43.470 99.99000% : 25710.277us 00:08:43.470 99.99900% : 25811.102us 00:08:43.470 99.99990% : 25811.102us 00:08:43.470 99.99999% : 25811.102us 00:08:43.470 00:08:43.470 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:08:43.470 ================================================================================= 00:08:43.470 1.00000% : 5318.498us 00:08:43.470 10.00000% : 5973.858us 00:08:43.470 25.00000% : 8620.505us 00:08:43.470 50.00000% : 11040.295us 00:08:43.470 75.00000% : 12098.954us 00:08:43.470 90.00000% : 13611.323us 00:08:43.470 95.00000% : 14518.745us 00:08:43.470 98.00000% : 15426.166us 00:08:43.470 99.00000% : 24197.908us 00:08:43.470 99.50000% : 25407.803us 00:08:43.470 99.90000% : 26214.400us 00:08:43.470 99.99000% : 26416.049us 00:08:43.470 99.99900% : 26416.049us 00:08:43.470 99.99990% : 26416.049us 00:08:43.470 99.99999% : 26416.049us 00:08:43.470 00:08:43.470 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:08:43.470 ================================================================================= 00:08:43.470 1.00000% : 5242.880us 00:08:43.470 10.00000% : 5923.446us 00:08:43.470 25.00000% : 8519.680us 00:08:43.470 50.00000% : 10989.883us 00:08:43.470 75.00000% : 12098.954us 00:08:43.470 90.00000% : 13611.323us 00:08:43.470 95.00000% : 14720.394us 00:08:43.470 98.00000% : 15930.289us 00:08:43.470 99.00000% : 26012.751us 00:08:43.470 99.50000% : 26819.348us 00:08:43.470 99.90000% : 27424.295us 00:08:43.470 99.99000% : 27625.945us 00:08:43.470 99.99900% : 27625.945us 00:08:43.470 99.99990% : 27625.945us 00:08:43.470 99.99999% : 27625.945us 00:08:43.470 00:08:43.470 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:08:43.470 ============================================================================== 00:08:43.470 Range in us Cumulative IO count 00:08:43.470 4915.200 - 4940.406: 0.0081% ( 1) 00:08:43.470 4940.406 - 4965.612: 0.0244% ( 2) 00:08:43.470 4965.612 - 4990.818: 0.0651% ( 5) 00:08:43.470 4990.818 - 5016.025: 0.1383% ( 9) 00:08:43.470 5016.025 - 5041.231: 0.2441% ( 13) 00:08:43.470 5041.231 - 5066.437: 0.3255% ( 10) 00:08:43.470 5066.437 - 5091.643: 0.3988% ( 9) 00:08:43.470 5091.643 - 5116.849: 0.5208% ( 15) 00:08:43.470 5116.849 - 5142.055: 0.7406% ( 27) 00:08:43.470 5142.055 - 5167.262: 0.9766% ( 29) 00:08:43.470 5167.262 - 5192.468: 1.2451% ( 33) 00:08:43.470 5192.468 - 5217.674: 1.4893% ( 30) 00:08:43.470 5217.674 - 5242.880: 1.7985% ( 38) 00:08:43.470 5242.880 - 5268.086: 2.0508% ( 31) 00:08:43.470 5268.086 - 5293.292: 2.3600% ( 38) 00:08:43.470 5293.292 - 5318.498: 2.5635% ( 25) 00:08:43.470 5318.498 - 5343.705: 2.8727% ( 38) 00:08:43.470 5343.705 - 5368.911: 3.1494% ( 34) 00:08:43.470 5368.911 - 5394.117: 3.4831% ( 41) 00:08:43.470 5394.117 - 5419.323: 3.7598% ( 34) 00:08:43.470 5419.323 - 5444.529: 
4.0202% ( 32) 00:08:43.470 5444.529 - 5469.735: 4.3294% ( 38) 00:08:43.471 5469.735 - 5494.942: 4.5817% ( 31) 00:08:43.471 5494.942 - 5520.148: 4.8910% ( 38) 00:08:43.471 5520.148 - 5545.354: 5.1676% ( 34) 00:08:43.471 5545.354 - 5570.560: 5.4362% ( 33) 00:08:43.471 5570.560 - 5595.766: 5.7536% ( 39) 00:08:43.471 5595.766 - 5620.972: 6.0303% ( 34) 00:08:43.471 5620.972 - 5646.178: 6.2581% ( 28) 00:08:43.471 5646.178 - 5671.385: 6.5918% ( 41) 00:08:43.471 5671.385 - 5696.591: 6.8441% ( 31) 00:08:43.471 5696.591 - 5721.797: 7.1208% ( 34) 00:08:43.471 5721.797 - 5747.003: 7.4463% ( 40) 00:08:43.471 5747.003 - 5772.209: 7.7311% ( 35) 00:08:43.471 5772.209 - 5797.415: 8.0078% ( 34) 00:08:43.471 5797.415 - 5822.622: 8.2764% ( 33) 00:08:43.471 5822.622 - 5847.828: 8.5775% ( 37) 00:08:43.471 5847.828 - 5873.034: 8.8379% ( 32) 00:08:43.471 5873.034 - 5898.240: 9.1227% ( 35) 00:08:43.471 5898.240 - 5923.446: 9.4157% ( 36) 00:08:43.471 5923.446 - 5948.652: 9.6598% ( 30) 00:08:43.471 5948.652 - 5973.858: 9.9691% ( 38) 00:08:43.471 5973.858 - 5999.065: 10.2702% ( 37) 00:08:43.471 5999.065 - 6024.271: 10.5713% ( 37) 00:08:43.471 6024.271 - 6049.477: 10.8236% ( 31) 00:08:43.471 6049.477 - 6074.683: 11.1491% ( 40) 00:08:43.471 6074.683 - 6099.889: 11.4095% ( 32) 00:08:43.471 6099.889 - 6125.095: 11.7269% ( 39) 00:08:43.471 6125.095 - 6150.302: 12.0280% ( 37) 00:08:43.471 6150.302 - 6175.508: 12.2884% ( 32) 00:08:43.471 6175.508 - 6200.714: 12.6302% ( 42) 00:08:43.471 6200.714 - 6225.920: 12.9313% ( 37) 00:08:43.471 6225.920 - 6251.126: 13.2406% ( 38) 00:08:43.471 6251.126 - 6276.332: 13.5498% ( 38) 00:08:43.471 6276.332 - 6301.538: 13.8753% ( 40) 00:08:43.471 6301.538 - 6326.745: 14.1683% ( 36) 00:08:43.471 6326.745 - 6351.951: 14.4531% ( 35) 00:08:43.471 6351.951 - 6377.157: 14.7868% ( 41) 00:08:43.471 6377.157 - 6402.363: 15.0716% ( 35) 00:08:43.471 6402.363 - 6427.569: 15.3727% ( 37) 00:08:43.471 6427.569 - 6452.775: 15.7145% ( 42) 00:08:43.471 6452.775 - 6503.188: 16.3249% ( 75) 00:08:43.471 6503.188 - 6553.600: 16.9515% ( 77) 00:08:43.471 6553.600 - 6604.012: 17.5537% ( 74) 00:08:43.471 6604.012 - 6654.425: 18.1315% ( 71) 00:08:43.471 6654.425 - 6704.837: 18.7012% ( 70) 00:08:43.471 6704.837 - 6755.249: 19.3034% ( 74) 00:08:43.471 6755.249 - 6805.662: 19.7998% ( 61) 00:08:43.471 6805.662 - 6856.074: 20.2555% ( 56) 00:08:43.471 6856.074 - 6906.486: 20.6624% ( 50) 00:08:43.471 6906.486 - 6956.898: 21.0205% ( 44) 00:08:43.471 6956.898 - 7007.311: 21.3379% ( 39) 00:08:43.471 7007.311 - 7057.723: 21.5820% ( 30) 00:08:43.471 7057.723 - 7108.135: 21.7611% ( 22) 00:08:43.471 7108.135 - 7158.548: 21.8913% ( 16) 00:08:43.471 7158.548 - 7208.960: 21.9808% ( 11) 00:08:43.471 7208.960 - 7259.372: 22.0540% ( 9) 00:08:43.471 7259.372 - 7309.785: 22.1110% ( 7) 00:08:43.471 7309.785 - 7360.197: 22.1680% ( 7) 00:08:43.471 7360.197 - 7410.609: 22.2168% ( 6) 00:08:43.471 7410.609 - 7461.022: 22.2656% ( 6) 00:08:43.471 7461.022 - 7511.434: 22.3226% ( 7) 00:08:43.471 7511.434 - 7561.846: 22.3633% ( 5) 00:08:43.471 7561.846 - 7612.258: 22.4040% ( 5) 00:08:43.471 7612.258 - 7662.671: 22.5179% ( 14) 00:08:43.471 7662.671 - 7713.083: 22.5911% ( 9) 00:08:43.471 7713.083 - 7763.495: 22.6400% ( 6) 00:08:43.471 7763.495 - 7813.908: 22.7214% ( 10) 00:08:43.471 7813.908 - 7864.320: 22.8271% ( 13) 00:08:43.471 7864.320 - 7914.732: 22.8923% ( 8) 00:08:43.471 7914.732 - 7965.145: 22.9736% ( 10) 00:08:43.471 7965.145 - 8015.557: 23.0632% ( 11) 00:08:43.471 8015.557 - 8065.969: 23.1527% ( 11) 00:08:43.471 8065.969 - 8116.382: 23.2422% ( 
11) 00:08:43.471 8116.382 - 8166.794: 23.3317% ( 11) 00:08:43.471 8166.794 - 8217.206: 23.4212% ( 11) 00:08:43.471 8217.206 - 8267.618: 23.5433% ( 15) 00:08:43.471 8267.618 - 8318.031: 23.6165% ( 9) 00:08:43.471 8318.031 - 8368.443: 23.7223% ( 13) 00:08:43.471 8368.443 - 8418.855: 23.8200% ( 12) 00:08:43.471 8418.855 - 8469.268: 23.8770% ( 7) 00:08:43.471 8469.268 - 8519.680: 23.9746% ( 12) 00:08:43.471 8519.680 - 8570.092: 24.0479% ( 9) 00:08:43.471 8570.092 - 8620.505: 24.0885% ( 5) 00:08:43.471 8620.505 - 8670.917: 24.2025% ( 14) 00:08:43.471 8670.917 - 8721.329: 24.3245% ( 15) 00:08:43.471 8721.329 - 8771.742: 24.4548% ( 16) 00:08:43.471 8771.742 - 8822.154: 24.5768% ( 15) 00:08:43.471 8822.154 - 8872.566: 24.8128% ( 29) 00:08:43.471 8872.566 - 8922.978: 24.9674% ( 19) 00:08:43.471 8922.978 - 8973.391: 25.2360% ( 33) 00:08:43.471 8973.391 - 9023.803: 25.4639% ( 28) 00:08:43.471 9023.803 - 9074.215: 25.7568% ( 36) 00:08:43.471 9074.215 - 9124.628: 26.0335% ( 34) 00:08:43.471 9124.628 - 9175.040: 26.2777% ( 30) 00:08:43.471 9175.040 - 9225.452: 26.5625% ( 35) 00:08:43.471 9225.452 - 9275.865: 26.9450% ( 47) 00:08:43.471 9275.865 - 9326.277: 27.2461% ( 37) 00:08:43.471 9326.277 - 9376.689: 27.5553% ( 38) 00:08:43.471 9376.689 - 9427.102: 27.9378% ( 47) 00:08:43.471 9427.102 - 9477.514: 28.3854% ( 55) 00:08:43.471 9477.514 - 9527.926: 28.8167% ( 53) 00:08:43.471 9527.926 - 9578.338: 29.3376% ( 64) 00:08:43.471 9578.338 - 9628.751: 29.8014% ( 57) 00:08:43.471 9628.751 - 9679.163: 30.4850% ( 84) 00:08:43.471 9679.163 - 9729.575: 30.9896% ( 62) 00:08:43.471 9729.575 - 9779.988: 31.5430% ( 68) 00:08:43.471 9779.988 - 9830.400: 32.1208% ( 71) 00:08:43.471 9830.400 - 9880.812: 32.8125% ( 85) 00:08:43.471 9880.812 - 9931.225: 33.3577% ( 67) 00:08:43.471 9931.225 - 9981.637: 34.0902% ( 90) 00:08:43.471 9981.637 - 10032.049: 34.7493% ( 81) 00:08:43.471 10032.049 - 10082.462: 35.5387% ( 97) 00:08:43.471 10082.462 - 10132.874: 36.1898% ( 80) 00:08:43.471 10132.874 - 10183.286: 37.1012% ( 112) 00:08:43.471 10183.286 - 10233.698: 37.9639% ( 106) 00:08:43.471 10233.698 - 10284.111: 38.7533% ( 97) 00:08:43.471 10284.111 - 10334.523: 39.5589% ( 99) 00:08:43.471 10334.523 - 10384.935: 40.5029% ( 116) 00:08:43.471 10384.935 - 10435.348: 41.4307% ( 114) 00:08:43.471 10435.348 - 10485.760: 42.4316% ( 123) 00:08:43.471 10485.760 - 10536.172: 43.2210% ( 97) 00:08:43.471 10536.172 - 10586.585: 44.1732% ( 117) 00:08:43.471 10586.585 - 10636.997: 45.2311% ( 130) 00:08:43.471 10636.997 - 10687.409: 46.3379% ( 136) 00:08:43.471 10687.409 - 10737.822: 47.2493% ( 112) 00:08:43.471 10737.822 - 10788.234: 48.2747% ( 126) 00:08:43.471 10788.234 - 10838.646: 49.3245% ( 129) 00:08:43.471 10838.646 - 10889.058: 50.4720% ( 141) 00:08:43.471 10889.058 - 10939.471: 51.6439% ( 144) 00:08:43.471 10939.471 - 10989.883: 52.5716% ( 114) 00:08:43.471 10989.883 - 11040.295: 53.5970% ( 126) 00:08:43.471 11040.295 - 11090.708: 54.5492% ( 117) 00:08:43.471 11090.708 - 11141.120: 55.5339% ( 121) 00:08:43.471 11141.120 - 11191.532: 56.5104% ( 120) 00:08:43.471 11191.532 - 11241.945: 57.4788% ( 119) 00:08:43.471 11241.945 - 11292.357: 58.5938% ( 137) 00:08:43.471 11292.357 - 11342.769: 59.5947% ( 123) 00:08:43.471 11342.769 - 11393.182: 60.7015% ( 136) 00:08:43.471 11393.182 - 11443.594: 61.6536% ( 117) 00:08:43.471 11443.594 - 11494.006: 62.6383% ( 121) 00:08:43.471 11494.006 - 11544.418: 63.6963% ( 130) 00:08:43.471 11544.418 - 11594.831: 64.6077% ( 112) 00:08:43.471 11594.831 - 11645.243: 65.6820% ( 132) 00:08:43.471 11645.243 - 
11695.655: 66.5853% ( 111) 00:08:43.471 11695.655 - 11746.068: 67.4479% ( 106) 00:08:43.471 11746.068 - 11796.480: 68.3024% ( 105) 00:08:43.471 11796.480 - 11846.892: 69.4173% ( 137) 00:08:43.471 11846.892 - 11897.305: 70.3206% ( 111) 00:08:43.471 11897.305 - 11947.717: 71.1670% ( 104) 00:08:43.471 11947.717 - 11998.129: 71.9971% ( 102) 00:08:43.471 11998.129 - 12048.542: 72.7865% ( 97) 00:08:43.471 12048.542 - 12098.954: 73.5596% ( 95) 00:08:43.471 12098.954 - 12149.366: 74.2839% ( 89) 00:08:43.471 12149.366 - 12199.778: 75.1302% ( 104) 00:08:43.471 12199.778 - 12250.191: 75.9521% ( 101) 00:08:43.471 12250.191 - 12300.603: 76.6357% ( 84) 00:08:43.471 12300.603 - 12351.015: 77.3844% ( 92) 00:08:43.471 12351.015 - 12401.428: 77.9704% ( 72) 00:08:43.471 12401.428 - 12451.840: 78.6133% ( 79) 00:08:43.471 12451.840 - 12502.252: 79.2318% ( 76) 00:08:43.471 12502.252 - 12552.665: 79.9316% ( 86) 00:08:43.471 12552.665 - 12603.077: 80.3304% ( 49) 00:08:43.471 12603.077 - 12653.489: 80.8431% ( 63) 00:08:43.471 12653.489 - 12703.902: 81.4372% ( 73) 00:08:43.471 12703.902 - 12754.314: 81.8034% ( 45) 00:08:43.471 12754.314 - 12804.726: 82.1533% ( 43) 00:08:43.471 12804.726 - 12855.138: 82.6335% ( 59) 00:08:43.471 12855.138 - 12905.551: 83.0485% ( 51) 00:08:43.471 12905.551 - 13006.375: 84.1146% ( 131) 00:08:43.471 13006.375 - 13107.200: 84.6761% ( 69) 00:08:43.471 13107.200 - 13208.025: 85.5876% ( 112) 00:08:43.471 13208.025 - 13308.849: 86.2386% ( 80) 00:08:43.471 13308.849 - 13409.674: 87.0117% ( 95) 00:08:43.471 13409.674 - 13510.498: 87.6953% ( 84) 00:08:43.471 13510.498 - 13611.323: 88.1755% ( 59) 00:08:43.471 13611.323 - 13712.148: 88.7777% ( 74) 00:08:43.471 13712.148 - 13812.972: 89.1764% ( 49) 00:08:43.471 13812.972 - 13913.797: 89.6973% ( 64) 00:08:43.471 13913.797 - 14014.622: 90.1611% ( 57) 00:08:43.471 14014.622 - 14115.446: 90.6576% ( 61) 00:08:43.471 14115.446 - 14216.271: 91.1377% ( 59) 00:08:43.471 14216.271 - 14317.095: 91.5039% ( 45) 00:08:43.471 14317.095 - 14417.920: 92.0003% ( 61) 00:08:43.471 14417.920 - 14518.745: 92.4235% ( 52) 00:08:43.471 14518.745 - 14619.569: 92.8385% ( 51) 00:08:43.471 14619.569 - 14720.394: 93.3757% ( 66) 00:08:43.471 14720.394 - 14821.218: 93.7012% ( 40) 00:08:43.471 14821.218 - 14922.043: 94.0755% ( 46) 00:08:43.472 14922.043 - 15022.868: 94.5068% ( 53) 00:08:43.472 15022.868 - 15123.692: 94.9626% ( 56) 00:08:43.472 15123.692 - 15224.517: 95.3613% ( 49) 00:08:43.472 15224.517 - 15325.342: 95.7438% ( 47) 00:08:43.472 15325.342 - 15426.166: 96.0938% ( 43) 00:08:43.472 15426.166 - 15526.991: 96.5007% ( 50) 00:08:43.472 15526.991 - 15627.815: 96.7448% ( 30) 00:08:43.472 15627.815 - 15728.640: 96.9971% ( 31) 00:08:43.472 15728.640 - 15829.465: 97.2493% ( 31) 00:08:43.472 15829.465 - 15930.289: 97.4935% ( 30) 00:08:43.472 15930.289 - 16031.114: 97.6725% ( 22) 00:08:43.472 16031.114 - 16131.938: 97.9492% ( 34) 00:08:43.472 16131.938 - 16232.763: 98.1445% ( 24) 00:08:43.472 16232.763 - 16333.588: 98.3154% ( 21) 00:08:43.472 16333.588 - 16434.412: 98.4375% ( 15) 00:08:43.472 16434.412 - 16535.237: 98.5352% ( 12) 00:08:43.472 16535.237 - 16636.062: 98.6410% ( 13) 00:08:43.472 16636.062 - 16736.886: 98.6979% ( 7) 00:08:43.472 16736.886 - 16837.711: 98.7386% ( 5) 00:08:43.472 16837.711 - 16938.535: 98.7956% ( 7) 00:08:43.472 16938.535 - 17039.360: 98.8200% ( 3) 00:08:43.472 17039.360 - 17140.185: 98.8444% ( 3) 00:08:43.472 17140.185 - 17241.009: 98.8770% ( 4) 00:08:43.472 17241.009 - 17341.834: 98.9095% ( 4) 00:08:43.472 17341.834 - 17442.658: 98.9583% ( 6) 
00:08:43.472 17745.132 - 17845.957: 98.9746% ( 2) 00:08:43.472 17845.957 - 17946.782: 99.0072% ( 4) 00:08:43.472 17946.782 - 18047.606: 99.0316% ( 3) 00:08:43.472 18047.606 - 18148.431: 99.0804% ( 6) 00:08:43.472 18148.431 - 18249.255: 99.1048% ( 3) 00:08:43.472 18249.255 - 18350.080: 99.1292% ( 3) 00:08:43.472 18350.080 - 18450.905: 99.1699% ( 5) 00:08:43.472 18450.905 - 18551.729: 99.1943% ( 3) 00:08:43.472 18551.729 - 18652.554: 99.2269% ( 4) 00:08:43.472 18652.554 - 18753.378: 99.2594% ( 4) 00:08:43.472 18753.378 - 18854.203: 99.2920% ( 4) 00:08:43.472 18854.203 - 18955.028: 99.3327% ( 5) 00:08:43.472 18955.028 - 19055.852: 99.3571% ( 3) 00:08:43.472 19055.852 - 19156.677: 99.3978% ( 5) 00:08:43.472 19156.677 - 19257.502: 99.4222% ( 3) 00:08:43.472 19257.502 - 19358.326: 99.4710% ( 6) 00:08:43.472 19358.326 - 19459.151: 99.4873% ( 2) 00:08:43.472 19459.151 - 19559.975: 99.5117% ( 3) 00:08:43.472 19559.975 - 19660.800: 99.5443% ( 4) 00:08:43.472 19660.800 - 19761.625: 99.5768% ( 4) 00:08:43.472 19761.625 - 19862.449: 99.6094% ( 4) 00:08:43.472 19862.449 - 19963.274: 99.6501% ( 5) 00:08:43.472 19963.274 - 20064.098: 99.6826% ( 4) 00:08:43.472 20064.098 - 20164.923: 99.7070% ( 3) 00:08:43.472 20164.923 - 20265.748: 99.7396% ( 4) 00:08:43.472 20265.748 - 20366.572: 99.7803% ( 5) 00:08:43.472 20366.572 - 20467.397: 99.8128% ( 4) 00:08:43.472 20467.397 - 20568.222: 99.8535% ( 5) 00:08:43.472 20568.222 - 20669.046: 99.8779% ( 3) 00:08:43.472 20669.046 - 20769.871: 99.9105% ( 4) 00:08:43.472 20769.871 - 20870.695: 99.9512% ( 5) 00:08:43.472 20870.695 - 20971.520: 99.9837% ( 4) 00:08:43.472 20971.520 - 21072.345: 100.0000% ( 2) 00:08:43.472 00:08:43.472 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:08:43.472 ============================================================================== 00:08:43.472 Range in us Cumulative IO count 00:08:43.472 4940.406 - 4965.612: 0.0244% ( 3) 00:08:43.472 4965.612 - 4990.818: 0.0326% ( 1) 00:08:43.472 4990.818 - 5016.025: 0.0570% ( 3) 00:08:43.472 5016.025 - 5041.231: 0.0895% ( 4) 00:08:43.472 5041.231 - 5066.437: 0.1221% ( 4) 00:08:43.472 5066.437 - 5091.643: 0.1628% ( 5) 00:08:43.472 5091.643 - 5116.849: 0.2197% ( 7) 00:08:43.472 5116.849 - 5142.055: 0.3499% ( 16) 00:08:43.472 5142.055 - 5167.262: 0.5208% ( 21) 00:08:43.472 5167.262 - 5192.468: 0.6022% ( 10) 00:08:43.472 5192.468 - 5217.674: 0.6917% ( 11) 00:08:43.472 5217.674 - 5242.880: 0.8382% ( 18) 00:08:43.472 5242.880 - 5268.086: 1.1149% ( 34) 00:08:43.472 5268.086 - 5293.292: 1.3916% ( 34) 00:08:43.472 5293.292 - 5318.498: 1.6032% ( 26) 00:08:43.472 5318.498 - 5343.705: 1.9287% ( 40) 00:08:43.472 5343.705 - 5368.911: 2.1891% ( 32) 00:08:43.472 5368.911 - 5394.117: 2.4333% ( 30) 00:08:43.472 5394.117 - 5419.323: 2.6855% ( 31) 00:08:43.472 5419.323 - 5444.529: 2.9297% ( 30) 00:08:43.472 5444.529 - 5469.735: 3.2389% ( 38) 00:08:43.472 5469.735 - 5494.942: 3.5156% ( 34) 00:08:43.472 5494.942 - 5520.148: 3.7923% ( 34) 00:08:43.472 5520.148 - 5545.354: 4.1178% ( 40) 00:08:43.472 5545.354 - 5570.560: 4.5492% ( 53) 00:08:43.472 5570.560 - 5595.766: 4.9398% ( 48) 00:08:43.472 5595.766 - 5620.972: 5.2979% ( 44) 00:08:43.472 5620.972 - 5646.178: 5.6152% ( 39) 00:08:43.472 5646.178 - 5671.385: 5.9326% ( 39) 00:08:43.472 5671.385 - 5696.591: 6.2581% ( 40) 00:08:43.472 5696.591 - 5721.797: 6.5837% ( 40) 00:08:43.472 5721.797 - 5747.003: 6.9417% ( 44) 00:08:43.472 5747.003 - 5772.209: 7.2998% ( 44) 00:08:43.472 5772.209 - 5797.415: 7.6742% ( 46) 00:08:43.472 5797.415 - 5822.622: 8.0160% ( 42) 
00:08:43.472 5822.622 - 5847.828: 8.3415% ( 40) 00:08:43.472 5847.828 - 5873.034: 8.6670% ( 40) 00:08:43.472 5873.034 - 5898.240: 8.9925% ( 40) 00:08:43.472 5898.240 - 5923.446: 9.3750% ( 47) 00:08:43.472 5923.446 - 5948.652: 9.6842% ( 38) 00:08:43.472 5948.652 - 5973.858: 10.0260% ( 42) 00:08:43.472 5973.858 - 5999.065: 10.3597% ( 41) 00:08:43.472 5999.065 - 6024.271: 10.6934% ( 41) 00:08:43.472 6024.271 - 6049.477: 11.0758% ( 47) 00:08:43.472 6049.477 - 6074.683: 11.3851% ( 38) 00:08:43.472 6074.683 - 6099.889: 11.7513% ( 45) 00:08:43.472 6099.889 - 6125.095: 12.0850% ( 41) 00:08:43.472 6125.095 - 6150.302: 12.4186% ( 41) 00:08:43.472 6150.302 - 6175.508: 12.7360% ( 39) 00:08:43.472 6175.508 - 6200.714: 13.1104% ( 46) 00:08:43.472 6200.714 - 6225.920: 13.4359% ( 40) 00:08:43.472 6225.920 - 6251.126: 13.8184% ( 47) 00:08:43.472 6251.126 - 6276.332: 14.1846% ( 45) 00:08:43.472 6276.332 - 6301.538: 14.5345% ( 43) 00:08:43.472 6301.538 - 6326.745: 14.8438% ( 38) 00:08:43.472 6326.745 - 6351.951: 15.2344% ( 48) 00:08:43.472 6351.951 - 6377.157: 15.5680% ( 41) 00:08:43.472 6377.157 - 6402.363: 15.9261% ( 44) 00:08:43.472 6402.363 - 6427.569: 16.2598% ( 41) 00:08:43.472 6427.569 - 6452.775: 16.6097% ( 43) 00:08:43.472 6452.775 - 6503.188: 17.3014% ( 85) 00:08:43.472 6503.188 - 6553.600: 17.9850% ( 84) 00:08:43.472 6553.600 - 6604.012: 18.5710% ( 72) 00:08:43.472 6604.012 - 6654.425: 19.1081% ( 66) 00:08:43.472 6654.425 - 6704.837: 19.5964% ( 60) 00:08:43.472 6704.837 - 6755.249: 20.0439% ( 55) 00:08:43.472 6755.249 - 6805.662: 20.4834% ( 54) 00:08:43.472 6805.662 - 6856.074: 20.7682% ( 35) 00:08:43.472 6856.074 - 6906.486: 21.0286% ( 32) 00:08:43.472 6906.486 - 6956.898: 21.3216% ( 36) 00:08:43.472 6956.898 - 7007.311: 21.5088% ( 23) 00:08:43.472 7007.311 - 7057.723: 21.6634% ( 19) 00:08:43.472 7057.723 - 7108.135: 21.7855% ( 15) 00:08:43.472 7108.135 - 7158.548: 21.8669% ( 10) 00:08:43.472 7158.548 - 7208.960: 21.9564% ( 11) 00:08:43.472 7208.960 - 7259.372: 22.0459% ( 11) 00:08:43.472 7259.372 - 7309.785: 22.1191% ( 9) 00:08:43.472 7309.785 - 7360.197: 22.2168% ( 12) 00:08:43.472 7360.197 - 7410.609: 22.3063% ( 11) 00:08:43.472 7410.609 - 7461.022: 22.3877% ( 10) 00:08:43.472 7461.022 - 7511.434: 22.4691% ( 10) 00:08:43.472 7511.434 - 7561.846: 22.5586% ( 11) 00:08:43.472 7561.846 - 7612.258: 22.6562% ( 12) 00:08:43.472 7612.258 - 7662.671: 22.7376% ( 10) 00:08:43.472 7662.671 - 7713.083: 22.8516% ( 14) 00:08:43.472 7713.083 - 7763.495: 23.0062% ( 19) 00:08:43.472 7763.495 - 7813.908: 23.1364% ( 16) 00:08:43.472 7813.908 - 7864.320: 23.2910% ( 19) 00:08:43.472 7864.320 - 7914.732: 23.4375% ( 18) 00:08:43.472 7914.732 - 7965.145: 23.5840% ( 18) 00:08:43.472 7965.145 - 8015.557: 23.7223% ( 17) 00:08:43.472 8015.557 - 8065.969: 23.8851% ( 20) 00:08:43.472 8065.969 - 8116.382: 24.0153% ( 16) 00:08:43.472 8116.382 - 8166.794: 24.1455% ( 16) 00:08:43.472 8166.794 - 8217.206: 24.2594% ( 14) 00:08:43.472 8217.206 - 8267.618: 24.3490% ( 11) 00:08:43.472 8267.618 - 8318.031: 24.4873% ( 17) 00:08:43.472 8318.031 - 8368.443: 24.6257% ( 17) 00:08:43.472 8368.443 - 8418.855: 24.7884% ( 20) 00:08:43.472 8418.855 - 8469.268: 24.9268% ( 17) 00:08:43.472 8469.268 - 8519.680: 25.1139% ( 23) 00:08:43.472 8519.680 - 8570.092: 25.2604% ( 18) 00:08:43.472 8570.092 - 8620.505: 25.4395% ( 22) 00:08:43.472 8620.505 - 8670.917: 25.6185% ( 22) 00:08:43.472 8670.917 - 8721.329: 25.8138% ( 24) 00:08:43.472 8721.329 - 8771.742: 26.0335% ( 27) 00:08:43.472 8771.742 - 8822.154: 26.2207% ( 23) 00:08:43.472 8822.154 - 
8872.566: 26.4160% ( 24) 00:08:43.472 8872.566 - 8922.978: 26.6764% ( 32) 00:08:43.472 8922.978 - 8973.391: 26.9043% ( 28) 00:08:43.472 8973.391 - 9023.803: 27.1159% ( 26) 00:08:43.472 9023.803 - 9074.215: 27.3193% ( 25) 00:08:43.472 9074.215 - 9124.628: 27.5716% ( 31) 00:08:43.472 9124.628 - 9175.040: 27.8646% ( 36) 00:08:43.472 9175.040 - 9225.452: 28.1250% ( 32) 00:08:43.472 9225.452 - 9275.865: 28.5156% ( 48) 00:08:43.472 9275.865 - 9326.277: 28.8330% ( 39) 00:08:43.472 9326.277 - 9376.689: 29.1748% ( 42) 00:08:43.472 9376.689 - 9427.102: 29.4840% ( 38) 00:08:43.472 9427.102 - 9477.514: 29.8014% ( 39) 00:08:43.472 9477.514 - 9527.926: 30.0781% ( 34) 00:08:43.472 9527.926 - 9578.338: 30.4606% ( 47) 00:08:43.472 9578.338 - 9628.751: 30.8024% ( 42) 00:08:43.472 9628.751 - 9679.163: 31.1930% ( 48) 00:08:43.472 9679.163 - 9729.575: 31.6650% ( 58) 00:08:43.473 9729.575 - 9779.988: 32.1045% ( 54) 00:08:43.473 9779.988 - 9830.400: 32.5684% ( 57) 00:08:43.473 9830.400 - 9880.812: 33.0811% ( 63) 00:08:43.473 9880.812 - 9931.225: 33.6995% ( 76) 00:08:43.473 9931.225 - 9981.637: 34.2285% ( 65) 00:08:43.473 9981.637 - 10032.049: 34.8633% ( 78) 00:08:43.473 10032.049 - 10082.462: 35.4411% ( 71) 00:08:43.473 10082.462 - 10132.874: 36.0596% ( 76) 00:08:43.473 10132.874 - 10183.286: 36.8896% ( 102) 00:08:43.473 10183.286 - 10233.698: 37.6709% ( 96) 00:08:43.473 10233.698 - 10284.111: 38.4847% ( 100) 00:08:43.473 10284.111 - 10334.523: 39.3636% ( 108) 00:08:43.473 10334.523 - 10384.935: 40.4134% ( 129) 00:08:43.473 10384.935 - 10435.348: 41.4876% ( 132) 00:08:43.473 10435.348 - 10485.760: 42.5212% ( 127) 00:08:43.473 10485.760 - 10536.172: 43.5303% ( 124) 00:08:43.473 10536.172 - 10586.585: 44.5394% ( 124) 00:08:43.473 10586.585 - 10636.997: 45.5648% ( 126) 00:08:43.473 10636.997 - 10687.409: 46.6471% ( 133) 00:08:43.473 10687.409 - 10737.822: 47.7783% ( 139) 00:08:43.473 10737.822 - 10788.234: 48.9339% ( 142) 00:08:43.473 10788.234 - 10838.646: 50.0244% ( 134) 00:08:43.473 10838.646 - 10889.058: 51.1556% ( 139) 00:08:43.473 10889.058 - 10939.471: 52.2217% ( 131) 00:08:43.473 10939.471 - 10989.883: 53.3203% ( 135) 00:08:43.473 10989.883 - 11040.295: 54.4189% ( 135) 00:08:43.473 11040.295 - 11090.708: 55.4932% ( 132) 00:08:43.473 11090.708 - 11141.120: 56.5430% ( 129) 00:08:43.473 11141.120 - 11191.532: 57.5684% ( 126) 00:08:43.473 11191.532 - 11241.945: 58.6019% ( 127) 00:08:43.473 11241.945 - 11292.357: 59.6598% ( 130) 00:08:43.473 11292.357 - 11342.769: 60.6283% ( 119) 00:08:43.473 11342.769 - 11393.182: 61.6211% ( 122) 00:08:43.473 11393.182 - 11443.594: 62.5732% ( 117) 00:08:43.473 11443.594 - 11494.006: 63.5254% ( 117) 00:08:43.473 11494.006 - 11544.418: 64.5182% ( 122) 00:08:43.473 11544.418 - 11594.831: 65.3646% ( 104) 00:08:43.473 11594.831 - 11645.243: 66.3330% ( 119) 00:08:43.473 11645.243 - 11695.655: 67.2689% ( 115) 00:08:43.473 11695.655 - 11746.068: 68.1641% ( 110) 00:08:43.473 11746.068 - 11796.480: 69.0999% ( 115) 00:08:43.473 11796.480 - 11846.892: 69.9300% ( 102) 00:08:43.473 11846.892 - 11897.305: 70.7520% ( 101) 00:08:43.473 11897.305 - 11947.717: 71.5658% ( 100) 00:08:43.473 11947.717 - 11998.129: 72.3470% ( 96) 00:08:43.473 11998.129 - 12048.542: 73.1283% ( 96) 00:08:43.473 12048.542 - 12098.954: 73.8281% ( 86) 00:08:43.473 12098.954 - 12149.366: 74.4629% ( 78) 00:08:43.473 12149.366 - 12199.778: 75.0977% ( 78) 00:08:43.473 12199.778 - 12250.191: 75.7161% ( 76) 00:08:43.473 12250.191 - 12300.603: 76.3102% ( 73) 00:08:43.473 12300.603 - 12351.015: 76.9287% ( 76) 00:08:43.473 
12351.015 - 12401.428: 77.5228% ( 73) 00:08:43.473 12401.428 - 12451.840: 78.0762% ( 68) 00:08:43.473 12451.840 - 12502.252: 78.6621% ( 72) 00:08:43.473 12502.252 - 12552.665: 79.1911% ( 65) 00:08:43.473 12552.665 - 12603.077: 79.6875% ( 61) 00:08:43.473 12603.077 - 12653.489: 80.2083% ( 64) 00:08:43.473 12653.489 - 12703.902: 80.6803% ( 58) 00:08:43.473 12703.902 - 12754.314: 81.1117% ( 53) 00:08:43.473 12754.314 - 12804.726: 81.5511% ( 54) 00:08:43.473 12804.726 - 12855.138: 81.9417% ( 48) 00:08:43.473 12855.138 - 12905.551: 82.3161% ( 46) 00:08:43.473 12905.551 - 13006.375: 83.0485% ( 90) 00:08:43.473 13006.375 - 13107.200: 83.7402% ( 85) 00:08:43.473 13107.200 - 13208.025: 84.4076% ( 82) 00:08:43.473 13208.025 - 13308.849: 85.0586% ( 80) 00:08:43.473 13308.849 - 13409.674: 85.6120% ( 68) 00:08:43.473 13409.674 - 13510.498: 86.1816% ( 70) 00:08:43.473 13510.498 - 13611.323: 86.7269% ( 67) 00:08:43.473 13611.323 - 13712.148: 87.2640% ( 66) 00:08:43.473 13712.148 - 13812.972: 87.7523% ( 60) 00:08:43.473 13812.972 - 13913.797: 88.2812% ( 65) 00:08:43.473 13913.797 - 14014.622: 88.7533% ( 58) 00:08:43.473 14014.622 - 14115.446: 89.3473% ( 73) 00:08:43.473 14115.446 - 14216.271: 89.8844% ( 66) 00:08:43.473 14216.271 - 14317.095: 90.4541% ( 70) 00:08:43.473 14317.095 - 14417.920: 91.0726% ( 76) 00:08:43.473 14417.920 - 14518.745: 91.6585% ( 72) 00:08:43.473 14518.745 - 14619.569: 92.1143% ( 56) 00:08:43.473 14619.569 - 14720.394: 92.5537% ( 54) 00:08:43.473 14720.394 - 14821.218: 92.9769% ( 52) 00:08:43.473 14821.218 - 14922.043: 93.4408% ( 57) 00:08:43.473 14922.043 - 15022.868: 93.9372% ( 61) 00:08:43.473 15022.868 - 15123.692: 94.4092% ( 58) 00:08:43.473 15123.692 - 15224.517: 94.8975% ( 60) 00:08:43.473 15224.517 - 15325.342: 95.3532% ( 56) 00:08:43.473 15325.342 - 15426.166: 95.8089% ( 56) 00:08:43.473 15426.166 - 15526.991: 96.2321% ( 52) 00:08:43.473 15526.991 - 15627.815: 96.7122% ( 59) 00:08:43.473 15627.815 - 15728.640: 97.0947% ( 47) 00:08:43.473 15728.640 - 15829.465: 97.4935% ( 49) 00:08:43.473 15829.465 - 15930.289: 97.8109% ( 39) 00:08:43.473 15930.289 - 16031.114: 97.9655% ( 19) 00:08:43.473 16031.114 - 16131.938: 98.1201% ( 19) 00:08:43.473 16131.938 - 16232.763: 98.2422% ( 15) 00:08:43.473 16232.763 - 16333.588: 98.3887% ( 18) 00:08:43.473 16333.588 - 16434.412: 98.5026% ( 14) 00:08:43.473 16434.412 - 16535.237: 98.5921% ( 11) 00:08:43.473 16535.237 - 16636.062: 98.6816% ( 11) 00:08:43.473 16636.062 - 16736.886: 98.7793% ( 12) 00:08:43.473 16736.886 - 16837.711: 98.8688% ( 11) 00:08:43.473 16837.711 - 16938.535: 98.9502% ( 10) 00:08:43.473 16938.535 - 17039.360: 98.9583% ( 1) 00:08:43.473 19358.326 - 19459.151: 98.9909% ( 4) 00:08:43.473 19459.151 - 19559.975: 99.0316% ( 5) 00:08:43.473 19559.975 - 19660.800: 99.0641% ( 4) 00:08:43.473 19660.800 - 19761.625: 99.1048% ( 5) 00:08:43.473 19761.625 - 19862.449: 99.1374% ( 4) 00:08:43.473 19862.449 - 19963.274: 99.1781% ( 5) 00:08:43.473 19963.274 - 20064.098: 99.2188% ( 5) 00:08:43.473 20064.098 - 20164.923: 99.2513% ( 4) 00:08:43.473 20164.923 - 20265.748: 99.2757% ( 3) 00:08:43.473 20265.748 - 20366.572: 99.3164% ( 5) 00:08:43.473 20366.572 - 20467.397: 99.3490% ( 4) 00:08:43.473 20467.397 - 20568.222: 99.3896% ( 5) 00:08:43.473 20568.222 - 20669.046: 99.4222% ( 4) 00:08:43.473 20669.046 - 20769.871: 99.4629% ( 5) 00:08:43.473 20769.871 - 20870.695: 99.5036% ( 5) 00:08:43.473 20870.695 - 20971.520: 99.5361% ( 4) 00:08:43.473 20971.520 - 21072.345: 99.5687% ( 4) 00:08:43.473 21072.345 - 21173.169: 99.6094% ( 5) 00:08:43.473 
21173.169 - 21273.994: 99.6419% ( 4) 00:08:43.473 21273.994 - 21374.818: 99.6826% ( 5) 00:08:43.473 21374.818 - 21475.643: 99.7233% ( 5) 00:08:43.473 21475.643 - 21576.468: 99.7640% ( 5) 00:08:43.473 21576.468 - 21677.292: 99.7965% ( 4) 00:08:43.473 21677.292 - 21778.117: 99.8291% ( 4) 00:08:43.473 21778.117 - 21878.942: 99.8617% ( 4) 00:08:43.473 21878.942 - 21979.766: 99.9023% ( 5) 00:08:43.473 21979.766 - 22080.591: 99.9430% ( 5) 00:08:43.473 22080.591 - 22181.415: 99.9756% ( 4) 00:08:43.473 22181.415 - 22282.240: 100.0000% ( 3) 00:08:43.473 00:08:43.473 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:08:43.473 ============================================================================== 00:08:43.473 Range in us Cumulative IO count 00:08:43.473 4990.818 - 5016.025: 0.0081% ( 1) 00:08:43.473 5016.025 - 5041.231: 0.0244% ( 2) 00:08:43.473 5041.231 - 5066.437: 0.0326% ( 1) 00:08:43.473 5066.437 - 5091.643: 0.0488% ( 2) 00:08:43.473 5091.643 - 5116.849: 0.0732% ( 3) 00:08:43.473 5116.849 - 5142.055: 0.1139% ( 5) 00:08:43.473 5142.055 - 5167.262: 0.1628% ( 6) 00:08:43.473 5167.262 - 5192.468: 0.2441% ( 10) 00:08:43.473 5192.468 - 5217.674: 0.3662% ( 15) 00:08:43.473 5217.674 - 5242.880: 0.5859% ( 27) 00:08:43.473 5242.880 - 5268.086: 0.8545% ( 33) 00:08:43.473 5268.086 - 5293.292: 1.0417% ( 23) 00:08:43.473 5293.292 - 5318.498: 1.2614% ( 27) 00:08:43.473 5318.498 - 5343.705: 1.4974% ( 29) 00:08:43.473 5343.705 - 5368.911: 1.8066% ( 38) 00:08:43.473 5368.911 - 5394.117: 2.1322% ( 40) 00:08:43.473 5394.117 - 5419.323: 2.5472% ( 51) 00:08:43.473 5419.323 - 5444.529: 2.8483% ( 37) 00:08:43.473 5444.529 - 5469.735: 3.1169% ( 33) 00:08:43.473 5469.735 - 5494.942: 3.4180% ( 37) 00:08:43.473 5494.942 - 5520.148: 3.7191% ( 37) 00:08:43.473 5520.148 - 5545.354: 4.0283% ( 38) 00:08:43.473 5545.354 - 5570.560: 4.3701% ( 42) 00:08:43.473 5570.560 - 5595.766: 4.7119% ( 42) 00:08:43.473 5595.766 - 5620.972: 5.0456% ( 41) 00:08:43.473 5620.972 - 5646.178: 5.3955% ( 43) 00:08:43.473 5646.178 - 5671.385: 5.7210% ( 40) 00:08:43.473 5671.385 - 5696.591: 6.0303% ( 38) 00:08:43.473 5696.591 - 5721.797: 6.3639% ( 41) 00:08:43.473 5721.797 - 5747.003: 6.7057% ( 42) 00:08:43.473 5747.003 - 5772.209: 7.0231% ( 39) 00:08:43.473 5772.209 - 5797.415: 7.3405% ( 39) 00:08:43.473 5797.415 - 5822.622: 7.6823% ( 42) 00:08:43.473 5822.622 - 5847.828: 8.0160% ( 41) 00:08:43.473 5847.828 - 5873.034: 8.3415% ( 40) 00:08:43.473 5873.034 - 5898.240: 8.6751% ( 41) 00:08:43.473 5898.240 - 5923.446: 9.0007% ( 40) 00:08:43.473 5923.446 - 5948.652: 9.3180% ( 39) 00:08:43.473 5948.652 - 5973.858: 9.6598% ( 42) 00:08:43.473 5973.858 - 5999.065: 9.9854% ( 40) 00:08:43.473 5999.065 - 6024.271: 10.3516% ( 45) 00:08:43.473 6024.271 - 6049.477: 10.6934% ( 42) 00:08:43.473 6049.477 - 6074.683: 11.0596% ( 45) 00:08:43.473 6074.683 - 6099.889: 11.4176% ( 44) 00:08:43.473 6099.889 - 6125.095: 11.7513% ( 41) 00:08:43.473 6125.095 - 6150.302: 12.1094% ( 44) 00:08:43.473 6150.302 - 6175.508: 12.4674% ( 44) 00:08:43.473 6175.508 - 6200.714: 12.8011% ( 41) 00:08:43.473 6200.714 - 6225.920: 13.1836% ( 47) 00:08:43.473 6225.920 - 6251.126: 13.5742% ( 48) 00:08:43.473 6251.126 - 6276.332: 13.9404% ( 45) 00:08:43.473 6276.332 - 6301.538: 14.3148% ( 46) 00:08:43.474 6301.538 - 6326.745: 14.6973% ( 47) 00:08:43.474 6326.745 - 6351.951: 15.0798% ( 47) 00:08:43.474 6351.951 - 6377.157: 15.4460% ( 45) 00:08:43.474 6377.157 - 6402.363: 15.7959% ( 43) 00:08:43.474 6402.363 - 6427.569: 16.1865% ( 48) 00:08:43.474 6427.569 - 6452.775: 16.5446% ( 
44) 00:08:43.474 6452.775 - 6503.188: 17.3014% ( 93) 00:08:43.474 6503.188 - 6553.600: 17.9688% ( 82) 00:08:43.474 6553.600 - 6604.012: 18.5872% ( 76) 00:08:43.474 6604.012 - 6654.425: 19.1976% ( 75) 00:08:43.474 6654.425 - 6704.837: 19.7754% ( 71) 00:08:43.474 6704.837 - 6755.249: 20.2311% ( 56) 00:08:43.474 6755.249 - 6805.662: 20.6462% ( 51) 00:08:43.474 6805.662 - 6856.074: 20.9717% ( 40) 00:08:43.474 6856.074 - 6906.486: 21.3135% ( 42) 00:08:43.474 6906.486 - 6956.898: 21.6227% ( 38) 00:08:43.474 6956.898 - 7007.311: 21.8587% ( 29) 00:08:43.474 7007.311 - 7057.723: 22.0215% ( 20) 00:08:43.474 7057.723 - 7108.135: 22.1680% ( 18) 00:08:43.474 7108.135 - 7158.548: 22.2819% ( 14) 00:08:43.474 7158.548 - 7208.960: 22.3958% ( 14) 00:08:43.474 7208.960 - 7259.372: 22.5098% ( 14) 00:08:43.474 7259.372 - 7309.785: 22.5911% ( 10) 00:08:43.474 7309.785 - 7360.197: 22.6807% ( 11) 00:08:43.474 7360.197 - 7410.609: 22.7702% ( 11) 00:08:43.474 7410.609 - 7461.022: 22.8923% ( 15) 00:08:43.474 7461.022 - 7511.434: 23.0143% ( 15) 00:08:43.474 7511.434 - 7561.846: 23.1364% ( 15) 00:08:43.474 7561.846 - 7612.258: 23.2747% ( 17) 00:08:43.474 7612.258 - 7662.671: 23.4212% ( 18) 00:08:43.474 7662.671 - 7713.083: 23.5758% ( 19) 00:08:43.474 7713.083 - 7763.495: 23.7061% ( 16) 00:08:43.474 7763.495 - 7813.908: 23.8444% ( 17) 00:08:43.474 7813.908 - 7864.320: 23.9909% ( 18) 00:08:43.474 7864.320 - 7914.732: 24.1130% ( 15) 00:08:43.474 7914.732 - 7965.145: 24.2350% ( 15) 00:08:43.474 7965.145 - 8015.557: 24.3652% ( 16) 00:08:43.474 8015.557 - 8065.969: 24.4954% ( 16) 00:08:43.474 8065.969 - 8116.382: 24.6257% ( 16) 00:08:43.474 8116.382 - 8166.794: 24.7477% ( 15) 00:08:43.474 8166.794 - 8217.206: 24.8861% ( 17) 00:08:43.474 8217.206 - 8267.618: 24.9837% ( 12) 00:08:43.474 8267.618 - 8318.031: 25.1058% ( 15) 00:08:43.474 8318.031 - 8368.443: 25.2279% ( 15) 00:08:43.474 8368.443 - 8418.855: 25.3743% ( 18) 00:08:43.474 8418.855 - 8469.268: 25.5290% ( 19) 00:08:43.474 8469.268 - 8519.680: 25.7406% ( 26) 00:08:43.474 8519.680 - 8570.092: 25.9196% ( 22) 00:08:43.474 8570.092 - 8620.505: 26.1149% ( 24) 00:08:43.474 8620.505 - 8670.917: 26.3102% ( 24) 00:08:43.474 8670.917 - 8721.329: 26.4974% ( 23) 00:08:43.474 8721.329 - 8771.742: 26.6927% ( 24) 00:08:43.474 8771.742 - 8822.154: 26.8880% ( 24) 00:08:43.474 8822.154 - 8872.566: 27.0915% ( 25) 00:08:43.474 8872.566 - 8922.978: 27.3031% ( 26) 00:08:43.474 8922.978 - 8973.391: 27.4902% ( 23) 00:08:43.474 8973.391 - 9023.803: 27.6611% ( 21) 00:08:43.474 9023.803 - 9074.215: 27.8239% ( 20) 00:08:43.474 9074.215 - 9124.628: 28.0273% ( 25) 00:08:43.474 9124.628 - 9175.040: 28.2064% ( 22) 00:08:43.474 9175.040 - 9225.452: 28.4180% ( 26) 00:08:43.474 9225.452 - 9275.865: 28.6702% ( 31) 00:08:43.474 9275.865 - 9326.277: 28.8411% ( 21) 00:08:43.474 9326.277 - 9376.689: 29.0690% ( 28) 00:08:43.474 9376.689 - 9427.102: 29.3213% ( 31) 00:08:43.474 9427.102 - 9477.514: 29.6387% ( 39) 00:08:43.474 9477.514 - 9527.926: 29.9398% ( 37) 00:08:43.474 9527.926 - 9578.338: 30.3304% ( 48) 00:08:43.474 9578.338 - 9628.751: 30.7617% ( 53) 00:08:43.474 9628.751 - 9679.163: 31.1605% ( 49) 00:08:43.474 9679.163 - 9729.575: 31.5999% ( 54) 00:08:43.474 9729.575 - 9779.988: 32.0394% ( 54) 00:08:43.474 9779.988 - 9830.400: 32.4544% ( 51) 00:08:43.474 9830.400 - 9880.812: 32.8939% ( 54) 00:08:43.474 9880.812 - 9931.225: 33.3659% ( 58) 00:08:43.474 9931.225 - 9981.637: 33.9030% ( 66) 00:08:43.474 9981.637 - 10032.049: 34.4889% ( 72) 00:08:43.474 10032.049 - 10082.462: 35.0505% ( 69) 00:08:43.474 
10082.462 - 10132.874: 35.6852% ( 78) 00:08:43.474 10132.874 - 10183.286: 36.3037% ( 76) 00:08:43.474 10183.286 - 10233.698: 36.9222% ( 76) 00:08:43.474 10233.698 - 10284.111: 37.6139% ( 85) 00:08:43.474 10284.111 - 10334.523: 38.4359% ( 101) 00:08:43.474 10334.523 - 10384.935: 39.1602% ( 89) 00:08:43.474 10384.935 - 10435.348: 39.9984% ( 103) 00:08:43.474 10435.348 - 10485.760: 40.7389% ( 91) 00:08:43.474 10485.760 - 10536.172: 41.5283% ( 97) 00:08:43.474 10536.172 - 10586.585: 42.4723% ( 116) 00:08:43.474 10586.585 - 10636.997: 43.5059% ( 127) 00:08:43.474 10636.997 - 10687.409: 44.5964% ( 134) 00:08:43.474 10687.409 - 10737.822: 45.7194% ( 138) 00:08:43.474 10737.822 - 10788.234: 46.7855% ( 131) 00:08:43.474 10788.234 - 10838.646: 47.9004% ( 137) 00:08:43.474 10838.646 - 10889.058: 48.9827% ( 133) 00:08:43.474 10889.058 - 10939.471: 50.0651% ( 133) 00:08:43.474 10939.471 - 10989.883: 51.1637% ( 135) 00:08:43.474 10989.883 - 11040.295: 52.1566% ( 122) 00:08:43.474 11040.295 - 11090.708: 53.2552% ( 135) 00:08:43.474 11090.708 - 11141.120: 54.3620% ( 136) 00:08:43.474 11141.120 - 11191.532: 55.4118% ( 129) 00:08:43.474 11191.532 - 11241.945: 56.4616% ( 129) 00:08:43.474 11241.945 - 11292.357: 57.4951% ( 127) 00:08:43.474 11292.357 - 11342.769: 58.5938% ( 135) 00:08:43.474 11342.769 - 11393.182: 59.6029% ( 124) 00:08:43.474 11393.182 - 11443.594: 60.5957% ( 122) 00:08:43.474 11443.594 - 11494.006: 61.5479% ( 117) 00:08:43.474 11494.006 - 11544.418: 62.6790% ( 139) 00:08:43.474 11544.418 - 11594.831: 63.7939% ( 137) 00:08:43.474 11594.831 - 11645.243: 64.9333% ( 140) 00:08:43.474 11645.243 - 11695.655: 65.9912% ( 130) 00:08:43.474 11695.655 - 11746.068: 67.0085% ( 125) 00:08:43.474 11746.068 - 11796.480: 67.9525% ( 116) 00:08:43.474 11796.480 - 11846.892: 68.9128% ( 118) 00:08:43.474 11846.892 - 11897.305: 69.7998% ( 109) 00:08:43.474 11897.305 - 11947.717: 70.6624% ( 106) 00:08:43.474 11947.717 - 11998.129: 71.4844% ( 101) 00:08:43.474 11998.129 - 12048.542: 72.3551% ( 107) 00:08:43.474 12048.542 - 12098.954: 73.2096% ( 105) 00:08:43.474 12098.954 - 12149.366: 74.0153% ( 99) 00:08:43.474 12149.366 - 12199.778: 74.7803% ( 94) 00:08:43.474 12199.778 - 12250.191: 75.5941% ( 100) 00:08:43.474 12250.191 - 12300.603: 76.4079% ( 100) 00:08:43.474 12300.603 - 12351.015: 77.1077% ( 86) 00:08:43.474 12351.015 - 12401.428: 77.6855% ( 71) 00:08:43.474 12401.428 - 12451.840: 78.2633% ( 71) 00:08:43.474 12451.840 - 12502.252: 78.8493% ( 72) 00:08:43.474 12502.252 - 12552.665: 79.3701% ( 64) 00:08:43.474 12552.665 - 12603.077: 79.9072% ( 66) 00:08:43.474 12603.077 - 12653.489: 80.3955% ( 60) 00:08:43.474 12653.489 - 12703.902: 80.9326% ( 66) 00:08:43.474 12703.902 - 12754.314: 81.4616% ( 65) 00:08:43.474 12754.314 - 12804.726: 81.9336% ( 58) 00:08:43.474 12804.726 - 12855.138: 82.3893% ( 56) 00:08:43.474 12855.138 - 12905.551: 82.7555% ( 45) 00:08:43.474 12905.551 - 13006.375: 83.5531% ( 98) 00:08:43.474 13006.375 - 13107.200: 84.3099% ( 93) 00:08:43.474 13107.200 - 13208.025: 85.1400% ( 102) 00:08:43.474 13208.025 - 13308.849: 85.8805% ( 91) 00:08:43.474 13308.849 - 13409.674: 86.6618% ( 96) 00:08:43.474 13409.674 - 13510.498: 87.4349% ( 95) 00:08:43.474 13510.498 - 13611.323: 88.1999% ( 94) 00:08:43.474 13611.323 - 13712.148: 88.9160% ( 88) 00:08:43.474 13712.148 - 13812.972: 89.6077% ( 85) 00:08:43.474 13812.972 - 13913.797: 90.1937% ( 72) 00:08:43.474 13913.797 - 14014.622: 90.7145% ( 64) 00:08:43.474 14014.622 - 14115.446: 91.2679% ( 68) 00:08:43.474 14115.446 - 14216.271: 91.8701% ( 74) 00:08:43.474 
14216.271 - 14317.095: 92.4235% ( 68) 00:08:43.474 14317.095 - 14417.920: 92.9688% ( 67) 00:08:43.474 14417.920 - 14518.745: 93.5140% ( 67) 00:08:43.474 14518.745 - 14619.569: 94.0104% ( 61) 00:08:43.474 14619.569 - 14720.394: 94.4906% ( 59) 00:08:43.474 14720.394 - 14821.218: 94.9951% ( 62) 00:08:43.474 14821.218 - 14922.043: 95.4346% ( 54) 00:08:43.474 14922.043 - 15022.868: 95.8333% ( 49) 00:08:43.474 15022.868 - 15123.692: 96.1670% ( 41) 00:08:43.474 15123.692 - 15224.517: 96.5088% ( 42) 00:08:43.474 15224.517 - 15325.342: 96.8506% ( 42) 00:08:43.474 15325.342 - 15426.166: 97.1436% ( 36) 00:08:43.474 15426.166 - 15526.991: 97.3796% ( 29) 00:08:43.474 15526.991 - 15627.815: 97.5830% ( 25) 00:08:43.474 15627.815 - 15728.640: 97.8109% ( 28) 00:08:43.474 15728.640 - 15829.465: 97.9736% ( 20) 00:08:43.474 15829.465 - 15930.289: 98.1364% ( 20) 00:08:43.474 15930.289 - 16031.114: 98.3073% ( 21) 00:08:43.474 16031.114 - 16131.938: 98.4212% ( 14) 00:08:43.474 16131.938 - 16232.763: 98.5189% ( 12) 00:08:43.474 16232.763 - 16333.588: 98.6247% ( 13) 00:08:43.474 16333.588 - 16434.412: 98.7386% ( 14) 00:08:43.474 16434.412 - 16535.237: 98.8444% ( 13) 00:08:43.474 16535.237 - 16636.062: 98.9339% ( 11) 00:08:43.474 16636.062 - 16736.886: 98.9583% ( 3) 00:08:43.474 21273.994 - 21374.818: 98.9909% ( 4) 00:08:43.474 21374.818 - 21475.643: 99.0153% ( 3) 00:08:43.474 21475.643 - 21576.468: 99.0723% ( 7) 00:08:43.474 21576.468 - 21677.292: 99.0967% ( 3) 00:08:43.474 21677.292 - 21778.117: 99.1292% ( 4) 00:08:43.474 21778.117 - 21878.942: 99.1618% ( 4) 00:08:43.474 21878.942 - 21979.766: 99.1943% ( 4) 00:08:43.474 21979.766 - 22080.591: 99.2269% ( 4) 00:08:43.474 22080.591 - 22181.415: 99.2676% ( 5) 00:08:43.474 22181.415 - 22282.240: 99.3083% ( 5) 00:08:43.474 22282.240 - 22383.065: 99.3408% ( 4) 00:08:43.474 22383.065 - 22483.889: 99.3815% ( 5) 00:08:43.475 22483.889 - 22584.714: 99.4222% ( 5) 00:08:43.475 22584.714 - 22685.538: 99.4548% ( 4) 00:08:43.475 22685.538 - 22786.363: 99.4954% ( 5) 00:08:43.475 22786.363 - 22887.188: 99.5280% ( 4) 00:08:43.475 22887.188 - 22988.012: 99.5605% ( 4) 00:08:43.475 22988.012 - 23088.837: 99.5931% ( 4) 00:08:43.475 23088.837 - 23189.662: 99.6338% ( 5) 00:08:43.475 23189.662 - 23290.486: 99.6663% ( 4) 00:08:43.475 23290.486 - 23391.311: 99.6989% ( 4) 00:08:43.475 23391.311 - 23492.135: 99.7396% ( 5) 00:08:43.475 23492.135 - 23592.960: 99.7721% ( 4) 00:08:43.475 23592.960 - 23693.785: 99.8128% ( 5) 00:08:43.475 23693.785 - 23794.609: 99.8535% ( 5) 00:08:43.475 23794.609 - 23895.434: 99.8942% ( 5) 00:08:43.475 23895.434 - 23996.258: 99.9349% ( 5) 00:08:43.475 23996.258 - 24097.083: 99.9756% ( 5) 00:08:43.475 24097.083 - 24197.908: 100.0000% ( 3) 00:08:43.475 00:08:43.475 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:08:43.475 ============================================================================== 00:08:43.475 Range in us Cumulative IO count 00:08:43.475 5091.643 - 5116.849: 0.0244% ( 3) 00:08:43.475 5116.849 - 5142.055: 0.0570% ( 4) 00:08:43.475 5142.055 - 5167.262: 0.1709% ( 14) 00:08:43.475 5167.262 - 5192.468: 0.2279% ( 7) 00:08:43.475 5192.468 - 5217.674: 0.3337% ( 13) 00:08:43.475 5217.674 - 5242.880: 0.4801% ( 18) 00:08:43.475 5242.880 - 5268.086: 0.6673% ( 23) 00:08:43.475 5268.086 - 5293.292: 0.9277% ( 32) 00:08:43.475 5293.292 - 5318.498: 1.1719% ( 30) 00:08:43.475 5318.498 - 5343.705: 1.3509% ( 22) 00:08:43.475 5343.705 - 5368.911: 1.6113% ( 32) 00:08:43.475 5368.911 - 5394.117: 1.9857% ( 46) 00:08:43.475 5394.117 - 5419.323: 2.4414% ( 
56) 00:08:43.475 5419.323 - 5444.529: 2.7100% ( 33) 00:08:43.475 5444.529 - 5469.735: 2.9541% ( 30) 00:08:43.475 5469.735 - 5494.942: 3.2796% ( 40) 00:08:43.475 5494.942 - 5520.148: 3.6214% ( 42) 00:08:43.475 5520.148 - 5545.354: 3.9062% ( 35) 00:08:43.475 5545.354 - 5570.560: 4.2399% ( 41) 00:08:43.475 5570.560 - 5595.766: 4.5573% ( 39) 00:08:43.475 5595.766 - 5620.972: 4.8665% ( 38) 00:08:43.475 5620.972 - 5646.178: 5.2002% ( 41) 00:08:43.475 5646.178 - 5671.385: 5.5420% ( 42) 00:08:43.475 5671.385 - 5696.591: 5.8675% ( 40) 00:08:43.475 5696.591 - 5721.797: 6.1849% ( 39) 00:08:43.475 5721.797 - 5747.003: 6.5592% ( 46) 00:08:43.475 5747.003 - 5772.209: 6.8929% ( 41) 00:08:43.475 5772.209 - 5797.415: 7.2673% ( 46) 00:08:43.475 5797.415 - 5822.622: 7.5928% ( 40) 00:08:43.475 5822.622 - 5847.828: 7.9346% ( 42) 00:08:43.475 5847.828 - 5873.034: 8.3089% ( 46) 00:08:43.475 5873.034 - 5898.240: 8.6670% ( 44) 00:08:43.475 5898.240 - 5923.446: 9.0088% ( 42) 00:08:43.475 5923.446 - 5948.652: 9.3831% ( 46) 00:08:43.475 5948.652 - 5973.858: 9.7331% ( 43) 00:08:43.475 5973.858 - 5999.065: 10.0586% ( 40) 00:08:43.475 5999.065 - 6024.271: 10.4329% ( 46) 00:08:43.475 6024.271 - 6049.477: 10.7747% ( 42) 00:08:43.475 6049.477 - 6074.683: 11.1491% ( 46) 00:08:43.475 6074.683 - 6099.889: 11.4990% ( 43) 00:08:43.475 6099.889 - 6125.095: 11.8327% ( 41) 00:08:43.475 6125.095 - 6150.302: 12.1989% ( 45) 00:08:43.475 6150.302 - 6175.508: 12.5814% ( 47) 00:08:43.475 6175.508 - 6200.714: 12.9639% ( 47) 00:08:43.475 6200.714 - 6225.920: 13.3301% ( 45) 00:08:43.475 6225.920 - 6251.126: 13.7126% ( 47) 00:08:43.475 6251.126 - 6276.332: 14.1032% ( 48) 00:08:43.475 6276.332 - 6301.538: 14.4775% ( 46) 00:08:43.475 6301.538 - 6326.745: 14.8844% ( 50) 00:08:43.475 6326.745 - 6351.951: 15.2588% ( 46) 00:08:43.475 6351.951 - 6377.157: 15.6738% ( 51) 00:08:43.475 6377.157 - 6402.363: 16.0726% ( 49) 00:08:43.475 6402.363 - 6427.569: 16.4714% ( 49) 00:08:43.475 6427.569 - 6452.775: 16.8620% ( 48) 00:08:43.475 6452.775 - 6503.188: 17.6270% ( 94) 00:08:43.475 6503.188 - 6553.600: 18.3757% ( 92) 00:08:43.475 6553.600 - 6604.012: 19.0430% ( 82) 00:08:43.475 6604.012 - 6654.425: 19.7103% ( 82) 00:08:43.475 6654.425 - 6704.837: 20.3206% ( 75) 00:08:43.475 6704.837 - 6755.249: 20.8496% ( 65) 00:08:43.475 6755.249 - 6805.662: 21.3216% ( 58) 00:08:43.475 6805.662 - 6856.074: 21.6634% ( 42) 00:08:43.475 6856.074 - 6906.486: 21.9564% ( 36) 00:08:43.475 6906.486 - 6956.898: 22.1924% ( 29) 00:08:43.475 6956.898 - 7007.311: 22.3796% ( 23) 00:08:43.475 7007.311 - 7057.723: 22.5260% ( 18) 00:08:43.475 7057.723 - 7108.135: 22.6644% ( 17) 00:08:43.475 7108.135 - 7158.548: 22.7946% ( 16) 00:08:43.475 7158.548 - 7208.960: 22.9004% ( 13) 00:08:43.475 7208.960 - 7259.372: 22.9980% ( 12) 00:08:43.475 7259.372 - 7309.785: 23.0713% ( 9) 00:08:43.475 7309.785 - 7360.197: 23.1689% ( 12) 00:08:43.475 7360.197 - 7410.609: 23.2585% ( 11) 00:08:43.475 7410.609 - 7461.022: 23.3398% ( 10) 00:08:43.475 7461.022 - 7511.434: 23.4294% ( 11) 00:08:43.475 7511.434 - 7561.846: 23.4863% ( 7) 00:08:43.475 7561.846 - 7612.258: 23.5596% ( 9) 00:08:43.475 7612.258 - 7662.671: 23.6328% ( 9) 00:08:43.475 7662.671 - 7713.083: 23.7223% ( 11) 00:08:43.475 7713.083 - 7763.495: 23.8444% ( 15) 00:08:43.475 7763.495 - 7813.908: 23.9421% ( 12) 00:08:43.475 7813.908 - 7864.320: 24.0397% ( 12) 00:08:43.475 7864.320 - 7914.732: 24.1292% ( 11) 00:08:43.475 7914.732 - 7965.145: 24.2188% ( 11) 00:08:43.475 7965.145 - 8015.557: 24.3083% ( 11) 00:08:43.475 8015.557 - 8065.969: 24.3978% ( 11) 
00:08:43.475 8065.969 - 8116.382: 24.4710% ( 9) 00:08:43.475 8116.382 - 8166.794: 24.5687% ( 12) 00:08:43.475 8166.794 - 8217.206: 24.6257% ( 7) 00:08:43.475 8217.206 - 8267.618: 24.6908% ( 8) 00:08:43.475 8267.618 - 8318.031: 24.7477% ( 7) 00:08:43.475 8318.031 - 8368.443: 24.8128% ( 8) 00:08:43.475 8368.443 - 8418.855: 24.8698% ( 7) 00:08:43.475 8418.855 - 8469.268: 24.9512% ( 10) 00:08:43.475 8469.268 - 8519.680: 25.0326% ( 10) 00:08:43.475 8519.680 - 8570.092: 25.1302% ( 12) 00:08:43.475 8570.092 - 8620.505: 25.2523% ( 15) 00:08:43.475 8620.505 - 8670.917: 25.4069% ( 19) 00:08:43.475 8670.917 - 8721.329: 25.5859% ( 22) 00:08:43.475 8721.329 - 8771.742: 25.7568% ( 21) 00:08:43.475 8771.742 - 8822.154: 25.9115% ( 19) 00:08:43.475 8822.154 - 8872.566: 26.0824% ( 21) 00:08:43.475 8872.566 - 8922.978: 26.2858% ( 25) 00:08:43.475 8922.978 - 8973.391: 26.4893% ( 25) 00:08:43.475 8973.391 - 9023.803: 26.7334% ( 30) 00:08:43.475 9023.803 - 9074.215: 26.9206% ( 23) 00:08:43.475 9074.215 - 9124.628: 27.1647% ( 30) 00:08:43.475 9124.628 - 9175.040: 27.4089% ( 30) 00:08:43.475 9175.040 - 9225.452: 27.6693% ( 32) 00:08:43.475 9225.452 - 9275.865: 27.8890% ( 27) 00:08:43.475 9275.865 - 9326.277: 28.1576% ( 33) 00:08:43.475 9326.277 - 9376.689: 28.3610% ( 25) 00:08:43.475 9376.689 - 9427.102: 28.5889% ( 28) 00:08:43.475 9427.102 - 9477.514: 28.8086% ( 27) 00:08:43.475 9477.514 - 9527.926: 29.0609% ( 31) 00:08:43.475 9527.926 - 9578.338: 29.3050% ( 30) 00:08:43.475 9578.338 - 9628.751: 29.5085% ( 25) 00:08:43.475 9628.751 - 9679.163: 29.7607% ( 31) 00:08:43.475 9679.163 - 9729.575: 30.0049% ( 30) 00:08:43.475 9729.575 - 9779.988: 30.2653% ( 32) 00:08:43.475 9779.988 - 9830.400: 30.5583% ( 36) 00:08:43.475 9830.400 - 9880.812: 31.0140% ( 56) 00:08:43.475 9880.812 - 9931.225: 31.5348% ( 64) 00:08:43.475 9931.225 - 9981.637: 32.0557% ( 64) 00:08:43.475 9981.637 - 10032.049: 32.5521% ( 61) 00:08:43.475 10032.049 - 10082.462: 33.1217% ( 70) 00:08:43.475 10082.462 - 10132.874: 33.7646% ( 79) 00:08:43.475 10132.874 - 10183.286: 34.4889% ( 89) 00:08:43.475 10183.286 - 10233.698: 35.2132% ( 89) 00:08:43.475 10233.698 - 10284.111: 36.0270% ( 100) 00:08:43.475 10284.111 - 10334.523: 36.6781% ( 80) 00:08:43.475 10334.523 - 10384.935: 37.4512% ( 95) 00:08:43.475 10384.935 - 10435.348: 38.2161% ( 94) 00:08:43.475 10435.348 - 10485.760: 39.1357% ( 113) 00:08:43.475 10485.760 - 10536.172: 39.9902% ( 105) 00:08:43.475 10536.172 - 10586.585: 40.9098% ( 113) 00:08:43.475 10586.585 - 10636.997: 41.9027% ( 122) 00:08:43.475 10636.997 - 10687.409: 42.9199% ( 125) 00:08:43.475 10687.409 - 10737.822: 44.0023% ( 133) 00:08:43.475 10737.822 - 10788.234: 45.0277% ( 126) 00:08:43.475 10788.234 - 10838.646: 46.0531% ( 126) 00:08:43.476 10838.646 - 10889.058: 47.1029% ( 129) 00:08:43.476 10889.058 - 10939.471: 48.2585% ( 142) 00:08:43.476 10939.471 - 10989.883: 49.3001% ( 128) 00:08:43.476 10989.883 - 11040.295: 50.3662% ( 131) 00:08:43.476 11040.295 - 11090.708: 51.4079% ( 128) 00:08:43.476 11090.708 - 11141.120: 52.4495% ( 128) 00:08:43.476 11141.120 - 11191.532: 53.5889% ( 140) 00:08:43.476 11191.532 - 11241.945: 54.6468% ( 130) 00:08:43.476 11241.945 - 11292.357: 55.7454% ( 135) 00:08:43.476 11292.357 - 11342.769: 56.9010% ( 142) 00:08:43.476 11342.769 - 11393.182: 58.0892% ( 146) 00:08:43.476 11393.182 - 11443.594: 59.2855% ( 147) 00:08:43.476 11443.594 - 11494.006: 60.4411% ( 142) 00:08:43.476 11494.006 - 11544.418: 61.5397% ( 135) 00:08:43.476 11544.418 - 11594.831: 62.6872% ( 141) 00:08:43.476 11594.831 - 11645.243: 
63.8021% ( 137) 00:08:43.476 11645.243 - 11695.655: 64.8519% ( 129) 00:08:43.476 11695.655 - 11746.068: 65.9017% ( 129) 00:08:43.476 11746.068 - 11796.480: 66.9271% ( 126) 00:08:43.476 11796.480 - 11846.892: 67.9199% ( 122) 00:08:43.476 11846.892 - 11897.305: 68.8965% ( 120) 00:08:43.476 11897.305 - 11947.717: 69.9300% ( 127) 00:08:43.476 11947.717 - 11998.129: 71.0042% ( 132) 00:08:43.476 11998.129 - 12048.542: 72.0540% ( 129) 00:08:43.476 12048.542 - 12098.954: 72.9899% ( 115) 00:08:43.476 12098.954 - 12149.366: 73.9502% ( 118) 00:08:43.476 12149.366 - 12199.778: 74.8617% ( 112) 00:08:43.476 12199.778 - 12250.191: 75.7243% ( 106) 00:08:43.476 12250.191 - 12300.603: 76.5299% ( 99) 00:08:43.476 12300.603 - 12351.015: 77.4089% ( 108) 00:08:43.476 12351.015 - 12401.428: 78.1982% ( 97) 00:08:43.476 12401.428 - 12451.840: 78.8818% ( 84) 00:08:43.476 12451.840 - 12502.252: 79.5492% ( 82) 00:08:43.476 12502.252 - 12552.665: 80.1839% ( 78) 00:08:43.476 12552.665 - 12603.077: 80.7454% ( 69) 00:08:43.476 12603.077 - 12653.489: 81.2907% ( 67) 00:08:43.476 12653.489 - 12703.902: 81.8115% ( 64) 00:08:43.476 12703.902 - 12754.314: 82.3242% ( 63) 00:08:43.476 12754.314 - 12804.726: 82.8125% ( 60) 00:08:43.476 12804.726 - 12855.138: 83.3659% ( 68) 00:08:43.476 12855.138 - 12905.551: 83.9030% ( 66) 00:08:43.476 12905.551 - 13006.375: 85.0505% ( 141) 00:08:43.476 13006.375 - 13107.200: 86.1491% ( 135) 00:08:43.476 13107.200 - 13208.025: 87.2477% ( 135) 00:08:43.476 13208.025 - 13308.849: 88.2487% ( 123) 00:08:43.476 13308.849 - 13409.674: 89.2660% ( 125) 00:08:43.476 13409.674 - 13510.498: 90.2669% ( 123) 00:08:43.476 13510.498 - 13611.323: 91.0807% ( 100) 00:08:43.476 13611.323 - 13712.148: 91.7562% ( 83) 00:08:43.476 13712.148 - 13812.972: 92.4967% ( 91) 00:08:43.476 13812.972 - 13913.797: 93.0990% ( 74) 00:08:43.476 13913.797 - 14014.622: 93.5954% ( 61) 00:08:43.476 14014.622 - 14115.446: 94.0267% ( 53) 00:08:43.476 14115.446 - 14216.271: 94.4824% ( 56) 00:08:43.476 14216.271 - 14317.095: 94.9300% ( 55) 00:08:43.476 14317.095 - 14417.920: 95.3857% ( 56) 00:08:43.476 14417.920 - 14518.745: 95.8333% ( 55) 00:08:43.476 14518.745 - 14619.569: 96.2077% ( 46) 00:08:43.476 14619.569 - 14720.394: 96.5495% ( 42) 00:08:43.476 14720.394 - 14821.218: 96.8587% ( 38) 00:08:43.476 14821.218 - 14922.043: 97.1110% ( 31) 00:08:43.476 14922.043 - 15022.868: 97.3470% ( 29) 00:08:43.476 15022.868 - 15123.692: 97.5423% ( 24) 00:08:43.476 15123.692 - 15224.517: 97.6807% ( 17) 00:08:43.476 15224.517 - 15325.342: 97.8190% ( 17) 00:08:43.476 15325.342 - 15426.166: 97.9248% ( 13) 00:08:43.476 15426.166 - 15526.991: 98.0306% ( 13) 00:08:43.476 15526.991 - 15627.815: 98.1445% ( 14) 00:08:43.476 15627.815 - 15728.640: 98.2666% ( 15) 00:08:43.476 15728.640 - 15829.465: 98.3724% ( 13) 00:08:43.476 15829.465 - 15930.289: 98.4863% ( 14) 00:08:43.476 15930.289 - 16031.114: 98.6003% ( 14) 00:08:43.476 16031.114 - 16131.938: 98.6735% ( 9) 00:08:43.476 16131.938 - 16232.763: 98.7061% ( 4) 00:08:43.476 16232.763 - 16333.588: 98.7386% ( 4) 00:08:43.476 16333.588 - 16434.412: 98.7793% ( 5) 00:08:43.476 16434.412 - 16535.237: 98.8200% ( 5) 00:08:43.476 16535.237 - 16636.062: 98.8607% ( 5) 00:08:43.476 16636.062 - 16736.886: 98.8932% ( 4) 00:08:43.476 16736.886 - 16837.711: 98.9339% ( 5) 00:08:43.476 16837.711 - 16938.535: 98.9583% ( 3) 00:08:43.476 22786.363 - 22887.188: 98.9746% ( 2) 00:08:43.476 22887.188 - 22988.012: 99.0072% ( 4) 00:08:43.476 22988.012 - 23088.837: 99.0479% ( 5) 00:08:43.476 23088.837 - 23189.662: 99.0804% ( 4) 00:08:43.476 
23189.662 - 23290.486: 99.1292% ( 6) 00:08:43.476 23290.486 - 23391.311: 99.1618% ( 4) 00:08:43.476 23391.311 - 23492.135: 99.2025% ( 5) 00:08:43.476 23492.135 - 23592.960: 99.2350% ( 4) 00:08:43.476 23592.960 - 23693.785: 99.2757% ( 5) 00:08:43.476 23693.785 - 23794.609: 99.3083% ( 4) 00:08:43.476 23794.609 - 23895.434: 99.3490% ( 5) 00:08:43.476 23895.434 - 23996.258: 99.3815% ( 4) 00:08:43.738 23996.258 - 24097.083: 99.4141% ( 4) 00:08:43.738 24097.083 - 24197.908: 99.4548% ( 5) 00:08:43.738 24197.908 - 24298.732: 99.4873% ( 4) 00:08:43.738 24298.732 - 24399.557: 99.5280% ( 5) 00:08:43.738 24399.557 - 24500.382: 99.5605% ( 4) 00:08:43.738 24500.382 - 24601.206: 99.5931% ( 4) 00:08:43.738 24601.206 - 24702.031: 99.6338% ( 5) 00:08:43.738 24702.031 - 24802.855: 99.6663% ( 4) 00:08:43.738 24802.855 - 24903.680: 99.7070% ( 5) 00:08:43.738 24903.680 - 25004.505: 99.7477% ( 5) 00:08:43.738 25004.505 - 25105.329: 99.7803% ( 4) 00:08:43.738 25105.329 - 25206.154: 99.8128% ( 4) 00:08:43.738 25206.154 - 25306.978: 99.8535% ( 5) 00:08:43.738 25306.978 - 25407.803: 99.8779% ( 3) 00:08:43.738 25407.803 - 25508.628: 99.9186% ( 5) 00:08:43.738 25508.628 - 25609.452: 99.9512% ( 4) 00:08:43.738 25609.452 - 25710.277: 99.9919% ( 5) 00:08:43.738 25710.277 - 25811.102: 100.0000% ( 1) 00:08:43.738 00:08:43.738 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:08:43.738 ============================================================================== 00:08:43.738 Range in us Cumulative IO count 00:08:43.738 5116.849 - 5142.055: 0.0570% ( 7) 00:08:43.738 5142.055 - 5167.262: 0.0814% ( 3) 00:08:43.738 5167.262 - 5192.468: 0.1302% ( 6) 00:08:43.738 5192.468 - 5217.674: 0.2360% ( 13) 00:08:43.738 5217.674 - 5242.880: 0.3825% ( 18) 00:08:43.738 5242.880 - 5268.086: 0.5208% ( 17) 00:08:43.738 5268.086 - 5293.292: 0.7650% ( 30) 00:08:43.738 5293.292 - 5318.498: 1.1230% ( 44) 00:08:43.738 5318.498 - 5343.705: 1.5055% ( 47) 00:08:43.738 5343.705 - 5368.911: 1.7334% ( 28) 00:08:43.738 5368.911 - 5394.117: 2.0345% ( 37) 00:08:43.738 5394.117 - 5419.323: 2.3275% ( 36) 00:08:43.738 5419.323 - 5444.529: 2.7018% ( 46) 00:08:43.738 5444.529 - 5469.735: 3.0518% ( 43) 00:08:43.738 5469.735 - 5494.942: 3.3854% ( 41) 00:08:43.738 5494.942 - 5520.148: 3.7598% ( 46) 00:08:43.738 5520.148 - 5545.354: 4.0771% ( 39) 00:08:43.738 5545.354 - 5570.560: 4.4596% ( 47) 00:08:43.738 5570.560 - 5595.766: 4.8096% ( 43) 00:08:43.738 5595.766 - 5620.972: 5.1595% ( 43) 00:08:43.738 5620.972 - 5646.178: 5.5094% ( 43) 00:08:43.738 5646.178 - 5671.385: 5.8594% ( 43) 00:08:43.738 5671.385 - 5696.591: 6.1849% ( 40) 00:08:43.738 5696.591 - 5721.797: 6.5186% ( 41) 00:08:43.738 5721.797 - 5747.003: 6.8604% ( 42) 00:08:43.738 5747.003 - 5772.209: 7.2347% ( 46) 00:08:43.738 5772.209 - 5797.415: 7.6497% ( 51) 00:08:43.738 5797.415 - 5822.622: 8.0566% ( 50) 00:08:43.738 5822.622 - 5847.828: 8.4229% ( 45) 00:08:43.738 5847.828 - 5873.034: 8.7809% ( 44) 00:08:43.738 5873.034 - 5898.240: 9.1309% ( 43) 00:08:43.738 5898.240 - 5923.446: 9.5133% ( 47) 00:08:43.738 5923.446 - 5948.652: 9.8877% ( 46) 00:08:43.738 5948.652 - 5973.858: 10.2783% ( 48) 00:08:43.738 5973.858 - 5999.065: 10.6527% ( 46) 00:08:43.738 5999.065 - 6024.271: 11.0352% ( 47) 00:08:43.738 6024.271 - 6049.477: 11.4258% ( 48) 00:08:43.738 6049.477 - 6074.683: 11.7757% ( 43) 00:08:43.738 6074.683 - 6099.889: 12.1419% ( 45) 00:08:43.738 6099.889 - 6125.095: 12.5326% ( 48) 00:08:43.738 6125.095 - 6150.302: 12.9069% ( 46) 00:08:43.738 6150.302 - 6175.508: 13.2812% ( 46) 00:08:43.738 
6175.508 - 6200.714: 13.6393% ( 44) 00:08:43.738 6200.714 - 6225.920: 14.0137% ( 46) 00:08:43.738 6225.920 - 6251.126: 14.3962% ( 47) 00:08:43.738 6251.126 - 6276.332: 14.7868% ( 48) 00:08:43.738 6276.332 - 6301.538: 15.1530% ( 45) 00:08:43.738 6301.538 - 6326.745: 15.5273% ( 46) 00:08:43.738 6326.745 - 6351.951: 15.9587% ( 53) 00:08:43.738 6351.951 - 6377.157: 16.3330% ( 46) 00:08:43.738 6377.157 - 6402.363: 16.7399% ( 50) 00:08:43.738 6402.363 - 6427.569: 17.1143% ( 46) 00:08:43.738 6427.569 - 6452.775: 17.4967% ( 47) 00:08:43.738 6452.775 - 6503.188: 18.2536% ( 93) 00:08:43.738 6503.188 - 6553.600: 18.9535% ( 86) 00:08:43.738 6553.600 - 6604.012: 19.6533% ( 86) 00:08:43.738 6604.012 - 6654.425: 20.3288% ( 83) 00:08:43.738 6654.425 - 6704.837: 20.9554% ( 77) 00:08:43.738 6704.837 - 6755.249: 21.5332% ( 71) 00:08:43.738 6755.249 - 6805.662: 21.9482% ( 51) 00:08:43.738 6805.662 - 6856.074: 22.2493% ( 37) 00:08:43.738 6856.074 - 6906.486: 22.4447% ( 24) 00:08:43.738 6906.486 - 6956.898: 22.6237% ( 22) 00:08:43.738 6956.898 - 7007.311: 22.8109% ( 23) 00:08:43.738 7007.311 - 7057.723: 22.9248% ( 14) 00:08:43.738 7057.723 - 7108.135: 23.0387% ( 14) 00:08:43.738 7108.135 - 7158.548: 23.1201% ( 10) 00:08:43.738 7158.548 - 7208.960: 23.2096% ( 11) 00:08:43.738 7208.960 - 7259.372: 23.2992% ( 11) 00:08:43.738 7259.372 - 7309.785: 23.3805% ( 10) 00:08:43.738 7309.785 - 7360.197: 23.4375% ( 7) 00:08:43.738 7360.197 - 7410.609: 23.4945% ( 7) 00:08:43.738 7410.609 - 7461.022: 23.5270% ( 4) 00:08:43.738 7461.022 - 7511.434: 23.5514% ( 3) 00:08:43.738 7511.434 - 7561.846: 23.6165% ( 8) 00:08:43.738 7561.846 - 7612.258: 23.6654% ( 6) 00:08:43.738 7612.258 - 7662.671: 23.7305% ( 8) 00:08:43.738 7662.671 - 7713.083: 23.7874% ( 7) 00:08:43.738 7713.083 - 7763.495: 23.8525% ( 8) 00:08:43.738 7763.495 - 7813.908: 23.9176% ( 8) 00:08:43.738 7813.908 - 7864.320: 23.9665% ( 6) 00:08:43.738 7864.320 - 7914.732: 24.0316% ( 8) 00:08:43.738 7914.732 - 7965.145: 24.0967% ( 8) 00:08:43.738 7965.145 - 8015.557: 24.1943% ( 12) 00:08:43.738 8015.557 - 8065.969: 24.2839% ( 11) 00:08:43.738 8065.969 - 8116.382: 24.3815% ( 12) 00:08:43.738 8116.382 - 8166.794: 24.4629% ( 10) 00:08:43.738 8166.794 - 8217.206: 24.5605% ( 12) 00:08:43.738 8217.206 - 8267.618: 24.6257% ( 8) 00:08:43.738 8267.618 - 8318.031: 24.6826% ( 7) 00:08:43.738 8318.031 - 8368.443: 24.7396% ( 7) 00:08:43.738 8368.443 - 8418.855: 24.8047% ( 8) 00:08:43.738 8418.855 - 8469.268: 24.8698% ( 8) 00:08:43.738 8469.268 - 8519.680: 24.9186% ( 6) 00:08:43.738 8519.680 - 8570.092: 24.9837% ( 8) 00:08:43.738 8570.092 - 8620.505: 25.0488% ( 8) 00:08:43.738 8620.505 - 8670.917: 25.1383% ( 11) 00:08:43.739 8670.917 - 8721.329: 25.2197% ( 10) 00:08:43.739 8721.329 - 8771.742: 25.3255% ( 13) 00:08:43.739 8771.742 - 8822.154: 25.4557% ( 16) 00:08:43.739 8822.154 - 8872.566: 25.5778% ( 15) 00:08:43.739 8872.566 - 8922.978: 25.6836% ( 13) 00:08:43.739 8922.978 - 8973.391: 25.8057% ( 15) 00:08:43.739 8973.391 - 9023.803: 25.9196% ( 14) 00:08:43.739 9023.803 - 9074.215: 26.0905% ( 21) 00:08:43.739 9074.215 - 9124.628: 26.3346% ( 30) 00:08:43.739 9124.628 - 9175.040: 26.5625% ( 28) 00:08:43.739 9175.040 - 9225.452: 26.7741% ( 26) 00:08:43.739 9225.452 - 9275.865: 27.0020% ( 28) 00:08:43.739 9275.865 - 9326.277: 27.2217% ( 27) 00:08:43.739 9326.277 - 9376.689: 27.4577% ( 29) 00:08:43.739 9376.689 - 9427.102: 27.6774% ( 27) 00:08:43.739 9427.102 - 9477.514: 27.8971% ( 27) 00:08:43.739 9477.514 - 9527.926: 28.1413% ( 30) 00:08:43.739 9527.926 - 9578.338: 28.3610% ( 27) 
00:08:43.739 9578.338 - 9628.751: 28.5970% ( 29) 00:08:43.739 9628.751 - 9679.163: 28.8167% ( 27) 00:08:43.739 9679.163 - 9729.575: 29.0690% ( 31) 00:08:43.739 9729.575 - 9779.988: 29.3376% ( 33) 00:08:43.739 9779.988 - 9830.400: 29.6224% ( 35) 00:08:43.739 9830.400 - 9880.812: 29.9561% ( 41) 00:08:43.739 9880.812 - 9931.225: 30.3711% ( 51) 00:08:43.739 9931.225 - 9981.637: 30.9082% ( 66) 00:08:43.739 9981.637 - 10032.049: 31.4535% ( 67) 00:08:43.739 10032.049 - 10082.462: 32.0638% ( 75) 00:08:43.739 10082.462 - 10132.874: 32.6579% ( 73) 00:08:43.739 10132.874 - 10183.286: 33.3171% ( 81) 00:08:43.739 10183.286 - 10233.698: 34.0007% ( 84) 00:08:43.739 10233.698 - 10284.111: 34.6761% ( 83) 00:08:43.739 10284.111 - 10334.523: 35.4329% ( 93) 00:08:43.739 10334.523 - 10384.935: 36.3200% ( 109) 00:08:43.739 10384.935 - 10435.348: 37.2965% ( 120) 00:08:43.739 10435.348 - 10485.760: 38.2650% ( 119) 00:08:43.739 10485.760 - 10536.172: 39.2822% ( 125) 00:08:43.739 10536.172 - 10586.585: 40.2913% ( 124) 00:08:43.739 10586.585 - 10636.997: 41.4225% ( 139) 00:08:43.739 10636.997 - 10687.409: 42.5212% ( 135) 00:08:43.739 10687.409 - 10737.822: 43.5954% ( 132) 00:08:43.739 10737.822 - 10788.234: 44.7510% ( 142) 00:08:43.739 10788.234 - 10838.646: 45.9229% ( 144) 00:08:43.739 10838.646 - 10889.058: 47.0866% ( 143) 00:08:43.739 10889.058 - 10939.471: 48.3480% ( 155) 00:08:43.739 10939.471 - 10989.883: 49.6663% ( 162) 00:08:43.739 10989.883 - 11040.295: 50.9196% ( 154) 00:08:43.739 11040.295 - 11090.708: 52.0426% ( 138) 00:08:43.739 11090.708 - 11141.120: 53.2389% ( 147) 00:08:43.739 11141.120 - 11191.532: 54.3701% ( 139) 00:08:43.739 11191.532 - 11241.945: 55.5990% ( 151) 00:08:43.739 11241.945 - 11292.357: 56.8359% ( 152) 00:08:43.739 11292.357 - 11342.769: 58.0729% ( 152) 00:08:43.739 11342.769 - 11393.182: 59.2448% ( 144) 00:08:43.739 11393.182 - 11443.594: 60.5387% ( 159) 00:08:43.739 11443.594 - 11494.006: 61.8490% ( 161) 00:08:43.739 11494.006 - 11544.418: 63.1266% ( 157) 00:08:43.739 11544.418 - 11594.831: 64.4124% ( 158) 00:08:43.739 11594.831 - 11645.243: 65.7064% ( 159) 00:08:43.739 11645.243 - 11695.655: 67.0247% ( 162) 00:08:43.739 11695.655 - 11746.068: 68.1966% ( 144) 00:08:43.739 11746.068 - 11796.480: 69.3441% ( 141) 00:08:43.739 11796.480 - 11846.892: 70.4997% ( 142) 00:08:43.739 11846.892 - 11897.305: 71.5251% ( 126) 00:08:43.739 11897.305 - 11947.717: 72.5423% ( 125) 00:08:43.739 11947.717 - 11998.129: 73.5758% ( 127) 00:08:43.739 11998.129 - 12048.542: 74.5280% ( 117) 00:08:43.739 12048.542 - 12098.954: 75.4557% ( 114) 00:08:43.739 12098.954 - 12149.366: 76.3509% ( 110) 00:08:43.739 12149.366 - 12199.778: 77.1322% ( 96) 00:08:43.739 12199.778 - 12250.191: 77.8727% ( 91) 00:08:43.739 12250.191 - 12300.603: 78.5482% ( 83) 00:08:43.739 12300.603 - 12351.015: 79.2318% ( 84) 00:08:43.739 12351.015 - 12401.428: 79.8828% ( 80) 00:08:43.739 12401.428 - 12451.840: 80.5013% ( 76) 00:08:43.739 12451.840 - 12502.252: 81.0954% ( 73) 00:08:43.739 12502.252 - 12552.665: 81.6406% ( 67) 00:08:43.739 12552.665 - 12603.077: 82.1696% ( 65) 00:08:43.739 12603.077 - 12653.489: 82.6986% ( 65) 00:08:43.739 12653.489 - 12703.902: 83.1217% ( 52) 00:08:43.739 12703.902 - 12754.314: 83.5368% ( 51) 00:08:43.739 12754.314 - 12804.726: 83.9193% ( 47) 00:08:43.739 12804.726 - 12855.138: 84.3180% ( 49) 00:08:43.739 12855.138 - 12905.551: 84.7168% ( 49) 00:08:43.739 12905.551 - 13006.375: 85.5143% ( 98) 00:08:43.739 13006.375 - 13107.200: 86.3851% ( 107) 00:08:43.739 13107.200 - 13208.025: 87.2803% ( 110) 00:08:43.739 
13208.025 - 13308.849: 88.0208% ( 91) 00:08:43.739 13308.849 - 13409.674: 88.6963% ( 83) 00:08:43.739 13409.674 - 13510.498: 89.5345% ( 103) 00:08:43.739 13510.498 - 13611.323: 90.2588% ( 89) 00:08:43.739 13611.323 - 13712.148: 90.9587% ( 86) 00:08:43.739 13712.148 - 13812.972: 91.6585% ( 86) 00:08:43.739 13812.972 - 13913.797: 92.2852% ( 77) 00:08:43.739 13913.797 - 14014.622: 92.8385% ( 68) 00:08:43.739 14014.622 - 14115.446: 93.3757% ( 66) 00:08:43.739 14115.446 - 14216.271: 93.9372% ( 69) 00:08:43.739 14216.271 - 14317.095: 94.4580% ( 64) 00:08:43.739 14317.095 - 14417.920: 94.9544% ( 61) 00:08:43.739 14417.920 - 14518.745: 95.4264% ( 58) 00:08:43.739 14518.745 - 14619.569: 95.8252% ( 49) 00:08:43.739 14619.569 - 14720.394: 96.2809% ( 56) 00:08:43.739 14720.394 - 14821.218: 96.7204% ( 54) 00:08:43.739 14821.218 - 14922.043: 97.1110% ( 48) 00:08:43.739 14922.043 - 15022.868: 97.4609% ( 43) 00:08:43.739 15022.868 - 15123.692: 97.7051% ( 30) 00:08:43.739 15123.692 - 15224.517: 97.8678% ( 20) 00:08:43.739 15224.517 - 15325.342: 97.9980% ( 16) 00:08:43.739 15325.342 - 15426.166: 98.0957% ( 12) 00:08:43.739 15426.166 - 15526.991: 98.1852% ( 11) 00:08:43.739 15526.991 - 15627.815: 98.2747% ( 11) 00:08:43.739 15627.815 - 15728.640: 98.3398% ( 8) 00:08:43.739 15728.640 - 15829.465: 98.3805% ( 5) 00:08:43.739 15829.465 - 15930.289: 98.4212% ( 5) 00:08:43.739 15930.289 - 16031.114: 98.4619% ( 5) 00:08:43.739 16031.114 - 16131.938: 98.5026% ( 5) 00:08:43.739 16131.938 - 16232.763: 98.5433% ( 5) 00:08:43.739 16232.763 - 16333.588: 98.5840% ( 5) 00:08:43.739 16333.588 - 16434.412: 98.6165% ( 4) 00:08:43.739 16434.412 - 16535.237: 98.6572% ( 5) 00:08:43.739 16535.237 - 16636.062: 98.6979% ( 5) 00:08:43.739 16636.062 - 16736.886: 98.7386% ( 5) 00:08:43.739 16736.886 - 16837.711: 98.7793% ( 5) 00:08:43.739 16837.711 - 16938.535: 98.8200% ( 5) 00:08:43.739 16938.535 - 17039.360: 98.8607% ( 5) 00:08:43.739 17039.360 - 17140.185: 98.9014% ( 5) 00:08:43.739 17140.185 - 17241.009: 98.9421% ( 5) 00:08:43.739 17241.009 - 17341.834: 98.9583% ( 2) 00:08:43.739 23996.258 - 24097.083: 98.9746% ( 2) 00:08:43.739 24097.083 - 24197.908: 99.0153% ( 5) 00:08:43.739 24197.908 - 24298.732: 99.0479% ( 4) 00:08:43.739 24298.732 - 24399.557: 99.0885% ( 5) 00:08:43.739 24399.557 - 24500.382: 99.1211% ( 4) 00:08:43.739 24500.382 - 24601.206: 99.1618% ( 5) 00:08:43.739 24601.206 - 24702.031: 99.2106% ( 6) 00:08:43.739 24702.031 - 24802.855: 99.2513% ( 5) 00:08:43.739 24802.855 - 24903.680: 99.2757% ( 3) 00:08:43.739 24903.680 - 25004.505: 99.3083% ( 4) 00:08:43.739 25004.505 - 25105.329: 99.3490% ( 5) 00:08:43.739 25105.329 - 25206.154: 99.3896% ( 5) 00:08:43.739 25206.154 - 25306.978: 99.4466% ( 7) 00:08:43.739 25306.978 - 25407.803: 99.5036% ( 7) 00:08:43.739 25407.803 - 25508.628: 99.5524% ( 6) 00:08:43.739 25508.628 - 25609.452: 99.6175% ( 8) 00:08:43.739 25609.452 - 25710.277: 99.6826% ( 8) 00:08:43.739 25710.277 - 25811.102: 99.7396% ( 7) 00:08:43.739 25811.102 - 26012.751: 99.8617% ( 15) 00:08:43.739 26012.751 - 26214.400: 99.9837% ( 15) 00:08:43.739 26214.400 - 26416.049: 100.0000% ( 2) 00:08:43.739 00:08:43.739 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:08:43.739 ============================================================================== 00:08:43.739 Range in us Cumulative IO count 00:08:43.739 4360.665 - 4385.871: 0.0244% ( 3) 00:08:43.739 4385.871 - 4411.077: 0.0326% ( 1) 00:08:43.739 4411.077 - 4436.283: 0.0488% ( 2) 00:08:43.739 4436.283 - 4461.489: 0.0570% ( 1) 00:08:43.739 4461.489 - 
4486.695: 0.0732% ( 2) 00:08:43.739 4486.695 - 4511.902: 0.0895% ( 2) 00:08:43.739 4511.902 - 4537.108: 0.1139% ( 3) 00:08:43.739 4537.108 - 4562.314: 0.1465% ( 4) 00:08:43.739 4562.314 - 4587.520: 0.1628% ( 2) 00:08:43.739 4587.520 - 4612.726: 0.1953% ( 4) 00:08:43.739 4612.726 - 4637.932: 0.2035% ( 1) 00:08:43.739 4637.932 - 4663.138: 0.2279% ( 3) 00:08:43.739 4663.138 - 4688.345: 0.2523% ( 3) 00:08:43.739 4688.345 - 4713.551: 0.2686% ( 2) 00:08:43.739 4713.551 - 4738.757: 0.2848% ( 2) 00:08:43.739 4738.757 - 4763.963: 0.3011% ( 2) 00:08:43.739 4763.963 - 4789.169: 0.3174% ( 2) 00:08:43.739 4789.169 - 4814.375: 0.3337% ( 2) 00:08:43.739 4814.375 - 4839.582: 0.3499% ( 2) 00:08:43.739 4839.582 - 4864.788: 0.3743% ( 3) 00:08:43.739 4864.788 - 4889.994: 0.3825% ( 1) 00:08:43.739 4889.994 - 4915.200: 0.3906% ( 1) 00:08:43.739 4915.200 - 4940.406: 0.4069% ( 2) 00:08:43.739 4940.406 - 4965.612: 0.4232% ( 2) 00:08:43.739 4965.612 - 4990.818: 0.4476% ( 3) 00:08:43.739 4990.818 - 5016.025: 0.4639% ( 2) 00:08:43.739 5016.025 - 5041.231: 0.4801% ( 2) 00:08:43.739 5041.231 - 5066.437: 0.4964% ( 2) 00:08:43.739 5066.437 - 5091.643: 0.5127% ( 2) 00:08:43.739 5091.643 - 5116.849: 0.5371% ( 3) 00:08:43.739 5116.849 - 5142.055: 0.5534% ( 2) 00:08:43.739 5142.055 - 5167.262: 0.6266% ( 9) 00:08:43.740 5167.262 - 5192.468: 0.6999% ( 9) 00:08:43.740 5192.468 - 5217.674: 0.7812% ( 10) 00:08:43.740 5217.674 - 5242.880: 1.0010% ( 27) 00:08:43.740 5242.880 - 5268.086: 1.1637% ( 20) 00:08:43.740 5268.086 - 5293.292: 1.3997% ( 29) 00:08:43.740 5293.292 - 5318.498: 1.6032% ( 25) 00:08:43.740 5318.498 - 5343.705: 1.8799% ( 34) 00:08:43.740 5343.705 - 5368.911: 2.2298% ( 43) 00:08:43.740 5368.911 - 5394.117: 2.5879% ( 44) 00:08:43.740 5394.117 - 5419.323: 2.9053% ( 39) 00:08:43.740 5419.323 - 5444.529: 3.2552% ( 43) 00:08:43.740 5444.529 - 5469.735: 3.7028% ( 55) 00:08:43.740 5469.735 - 5494.942: 4.0934% ( 48) 00:08:43.740 5494.942 - 5520.148: 4.3864% ( 36) 00:08:43.740 5520.148 - 5545.354: 4.7363% ( 43) 00:08:43.740 5545.354 - 5570.560: 5.0374% ( 37) 00:08:43.740 5570.560 - 5595.766: 5.4281% ( 48) 00:08:43.740 5595.766 - 5620.972: 5.7943% ( 45) 00:08:43.740 5620.972 - 5646.178: 6.1686% ( 46) 00:08:43.740 5646.178 - 5671.385: 6.5023% ( 41) 00:08:43.740 5671.385 - 5696.591: 6.8766% ( 46) 00:08:43.740 5696.591 - 5721.797: 7.2510% ( 46) 00:08:43.740 5721.797 - 5747.003: 7.6172% ( 45) 00:08:43.740 5747.003 - 5772.209: 7.9753% ( 44) 00:08:43.740 5772.209 - 5797.415: 8.3171% ( 42) 00:08:43.740 5797.415 - 5822.622: 8.6833% ( 45) 00:08:43.740 5822.622 - 5847.828: 9.0576% ( 46) 00:08:43.740 5847.828 - 5873.034: 9.3994% ( 42) 00:08:43.740 5873.034 - 5898.240: 9.7656% ( 45) 00:08:43.740 5898.240 - 5923.446: 10.1074% ( 42) 00:08:43.740 5923.446 - 5948.652: 10.4411% ( 41) 00:08:43.740 5948.652 - 5973.858: 10.7829% ( 42) 00:08:43.740 5973.858 - 5999.065: 11.1084% ( 40) 00:08:43.740 5999.065 - 6024.271: 11.4421% ( 41) 00:08:43.740 6024.271 - 6049.477: 11.8083% ( 45) 00:08:43.740 6049.477 - 6074.683: 12.1501% ( 42) 00:08:43.740 6074.683 - 6099.889: 12.5244% ( 46) 00:08:43.740 6099.889 - 6125.095: 12.8988% ( 46) 00:08:43.740 6125.095 - 6150.302: 13.2568% ( 44) 00:08:43.740 6150.302 - 6175.508: 13.6149% ( 44) 00:08:43.740 6175.508 - 6200.714: 14.0055% ( 48) 00:08:43.740 6200.714 - 6225.920: 14.3799% ( 46) 00:08:43.740 6225.920 - 6251.126: 14.7786% ( 49) 00:08:43.740 6251.126 - 6276.332: 15.1367% ( 44) 00:08:43.740 6276.332 - 6301.538: 15.5029% ( 45) 00:08:43.740 6301.538 - 6326.745: 15.8447% ( 42) 00:08:43.740 6326.745 - 6351.951: 
16.2191% ( 46) 00:08:43.740 6351.951 - 6377.157: 16.6016% ( 47) 00:08:43.740 6377.157 - 6402.363: 17.0085% ( 50) 00:08:43.740 6402.363 - 6427.569: 17.3910% ( 47) 00:08:43.740 6427.569 - 6452.775: 17.7327% ( 42) 00:08:43.740 6452.775 - 6503.188: 18.5059% ( 95) 00:08:43.740 6503.188 - 6553.600: 19.2057% ( 86) 00:08:43.740 6553.600 - 6604.012: 19.8324% ( 77) 00:08:43.740 6604.012 - 6654.425: 20.4671% ( 78) 00:08:43.740 6654.425 - 6704.837: 21.0368% ( 70) 00:08:43.740 6704.837 - 6755.249: 21.4681% ( 53) 00:08:43.740 6755.249 - 6805.662: 21.8913% ( 52) 00:08:43.740 6805.662 - 6856.074: 22.2168% ( 40) 00:08:43.740 6856.074 - 6906.486: 22.4528% ( 29) 00:08:43.740 6906.486 - 6956.898: 22.6481% ( 24) 00:08:43.740 6956.898 - 7007.311: 22.7783% ( 16) 00:08:43.740 7007.311 - 7057.723: 22.8923% ( 14) 00:08:43.740 7057.723 - 7108.135: 22.9818% ( 11) 00:08:43.740 7108.135 - 7158.548: 23.0713% ( 11) 00:08:43.740 7158.548 - 7208.960: 23.1689% ( 12) 00:08:43.740 7208.960 - 7259.372: 23.2585% ( 11) 00:08:43.740 7259.372 - 7309.785: 23.3236% ( 8) 00:08:43.740 7309.785 - 7360.197: 23.3805% ( 7) 00:08:43.740 7360.197 - 7410.609: 23.4375% ( 7) 00:08:43.740 7410.609 - 7461.022: 23.5107% ( 9) 00:08:43.740 7461.022 - 7511.434: 23.5677% ( 7) 00:08:43.740 7511.434 - 7561.846: 23.6410% ( 9) 00:08:43.740 7561.846 - 7612.258: 23.6979% ( 7) 00:08:43.740 7612.258 - 7662.671: 23.7549% ( 7) 00:08:43.740 7662.671 - 7713.083: 23.8037% ( 6) 00:08:43.740 7713.083 - 7763.495: 23.8688% ( 8) 00:08:43.740 7763.495 - 7813.908: 23.9583% ( 11) 00:08:43.740 7813.908 - 7864.320: 24.0560% ( 12) 00:08:43.740 7864.320 - 7914.732: 24.1374% ( 10) 00:08:43.740 7914.732 - 7965.145: 24.2188% ( 10) 00:08:43.740 7965.145 - 8015.557: 24.3164% ( 12) 00:08:43.740 8015.557 - 8065.969: 24.3978% ( 10) 00:08:43.740 8065.969 - 8116.382: 24.4873% ( 11) 00:08:43.740 8116.382 - 8166.794: 24.5687% ( 10) 00:08:43.740 8166.794 - 8217.206: 24.6663% ( 12) 00:08:43.740 8217.206 - 8267.618: 24.7314% ( 8) 00:08:43.740 8267.618 - 8318.031: 24.7965% ( 8) 00:08:43.740 8318.031 - 8368.443: 24.8535% ( 7) 00:08:43.740 8368.443 - 8418.855: 24.9186% ( 8) 00:08:43.740 8418.855 - 8469.268: 24.9756% ( 7) 00:08:43.740 8469.268 - 8519.680: 25.0407% ( 8) 00:08:43.740 8519.680 - 8570.092: 25.0977% ( 7) 00:08:43.740 8570.092 - 8620.505: 25.1546% ( 7) 00:08:43.740 8620.505 - 8670.917: 25.2197% ( 8) 00:08:43.740 8670.917 - 8721.329: 25.2848% ( 8) 00:08:43.740 8721.329 - 8771.742: 25.3499% ( 8) 00:08:43.740 8771.742 - 8822.154: 25.4150% ( 8) 00:08:43.740 8822.154 - 8872.566: 25.4801% ( 8) 00:08:43.740 8872.566 - 8922.978: 25.5778% ( 12) 00:08:43.740 8922.978 - 8973.391: 25.6673% ( 11) 00:08:43.740 8973.391 - 9023.803: 25.8057% ( 17) 00:08:43.740 9023.803 - 9074.215: 25.9766% ( 21) 00:08:43.740 9074.215 - 9124.628: 26.0986% ( 15) 00:08:43.740 9124.628 - 9175.040: 26.2858% ( 23) 00:08:43.740 9175.040 - 9225.452: 26.4323% ( 18) 00:08:43.740 9225.452 - 9275.865: 26.6276% ( 24) 00:08:43.740 9275.865 - 9326.277: 26.8066% ( 22) 00:08:43.740 9326.277 - 9376.689: 27.0264% ( 27) 00:08:43.740 9376.689 - 9427.102: 27.2054% ( 22) 00:08:43.740 9427.102 - 9477.514: 27.3356% ( 16) 00:08:43.740 9477.514 - 9527.926: 27.5553% ( 27) 00:08:43.740 9527.926 - 9578.338: 27.8809% ( 40) 00:08:43.740 9578.338 - 9628.751: 28.1169% ( 29) 00:08:43.740 9628.751 - 9679.163: 28.4180% ( 37) 00:08:43.740 9679.163 - 9729.575: 28.8493% ( 53) 00:08:43.740 9729.575 - 9779.988: 29.2725% ( 52) 00:08:43.740 9779.988 - 9830.400: 29.7933% ( 64) 00:08:43.740 9830.400 - 9880.812: 30.4362% ( 79) 00:08:43.740 9880.812 - 9931.225: 
31.0954% ( 81) 00:08:43.740 9931.225 - 9981.637: 31.6976% ( 74) 00:08:43.740 9981.637 - 10032.049: 32.3405% ( 79) 00:08:43.740 10032.049 - 10082.462: 33.0566% ( 88) 00:08:43.740 10082.462 - 10132.874: 33.7077% ( 80) 00:08:43.740 10132.874 - 10183.286: 34.3750% ( 82) 00:08:43.740 10183.286 - 10233.698: 35.3190% ( 116) 00:08:43.740 10233.698 - 10284.111: 36.1003% ( 96) 00:08:43.740 10284.111 - 10334.523: 36.9059% ( 99) 00:08:43.740 10334.523 - 10384.935: 37.7767% ( 107) 00:08:43.740 10384.935 - 10435.348: 38.6963% ( 113) 00:08:43.740 10435.348 - 10485.760: 39.6810% ( 121) 00:08:43.740 10485.760 - 10536.172: 40.6901% ( 124) 00:08:43.740 10536.172 - 10586.585: 41.5690% ( 108) 00:08:43.740 10586.585 - 10636.997: 42.6921% ( 138) 00:08:43.740 10636.997 - 10687.409: 43.7663% ( 132) 00:08:43.740 10687.409 - 10737.822: 44.7510% ( 121) 00:08:43.740 10737.822 - 10788.234: 45.8496% ( 135) 00:08:43.740 10788.234 - 10838.646: 46.9157% ( 131) 00:08:43.740 10838.646 - 10889.058: 48.2422% ( 163) 00:08:43.740 10889.058 - 10939.471: 49.5036% ( 155) 00:08:43.740 10939.471 - 10989.883: 50.6673% ( 143) 00:08:43.740 10989.883 - 11040.295: 51.8962% ( 151) 00:08:43.740 11040.295 - 11090.708: 53.0680% ( 144) 00:08:43.740 11090.708 - 11141.120: 54.3620% ( 159) 00:08:43.740 11141.120 - 11191.532: 55.6885% ( 163) 00:08:43.740 11191.532 - 11241.945: 56.9987% ( 161) 00:08:43.740 11241.945 - 11292.357: 58.1950% ( 147) 00:08:43.740 11292.357 - 11342.769: 59.4727% ( 157) 00:08:43.740 11342.769 - 11393.182: 60.6201% ( 141) 00:08:43.740 11393.182 - 11443.594: 61.8245% ( 148) 00:08:43.740 11443.594 - 11494.006: 63.0697% ( 153) 00:08:43.740 11494.006 - 11544.418: 64.1764% ( 136) 00:08:43.740 11544.418 - 11594.831: 65.3320% ( 142) 00:08:43.740 11594.831 - 11645.243: 66.4632% ( 139) 00:08:43.740 11645.243 - 11695.655: 67.5537% ( 134) 00:08:43.740 11695.655 - 11746.068: 68.6361% ( 133) 00:08:43.740 11746.068 - 11796.480: 69.6370% ( 123) 00:08:43.740 11796.480 - 11846.892: 70.6217% ( 121) 00:08:43.740 11846.892 - 11897.305: 71.6553% ( 127) 00:08:43.740 11897.305 - 11947.717: 72.5342% ( 108) 00:08:43.740 11947.717 - 11998.129: 73.4375% ( 111) 00:08:43.740 11998.129 - 12048.542: 74.3083% ( 107) 00:08:43.740 12048.542 - 12098.954: 75.2197% ( 112) 00:08:43.740 12098.954 - 12149.366: 76.1963% ( 120) 00:08:43.740 12149.366 - 12199.778: 77.1240% ( 114) 00:08:43.740 12199.778 - 12250.191: 77.9867% ( 106) 00:08:43.740 12250.191 - 12300.603: 78.8737% ( 109) 00:08:43.740 12300.603 - 12351.015: 79.6956% ( 101) 00:08:43.740 12351.015 - 12401.428: 80.4525% ( 93) 00:08:43.740 12401.428 - 12451.840: 81.1686% ( 88) 00:08:43.740 12451.840 - 12502.252: 81.7952% ( 77) 00:08:43.740 12502.252 - 12552.665: 82.3730% ( 71) 00:08:43.740 12552.665 - 12603.077: 82.9590% ( 72) 00:08:43.740 12603.077 - 12653.489: 83.5205% ( 69) 00:08:43.740 12653.489 - 12703.902: 84.0495% ( 65) 00:08:43.740 12703.902 - 12754.314: 84.5378% ( 60) 00:08:43.740 12754.314 - 12804.726: 85.0505% ( 63) 00:08:43.740 12804.726 - 12855.138: 85.5225% ( 58) 00:08:43.740 12855.138 - 12905.551: 86.0026% ( 59) 00:08:43.740 12905.551 - 13006.375: 86.7432% ( 91) 00:08:43.740 13006.375 - 13107.200: 87.4512% ( 87) 00:08:43.740 13107.200 - 13208.025: 88.1185% ( 82) 00:08:43.740 13208.025 - 13308.849: 88.7533% ( 78) 00:08:43.740 13308.849 - 13409.674: 89.4043% ( 80) 00:08:43.740 13409.674 - 13510.498: 89.9251% ( 64) 00:08:43.740 13510.498 - 13611.323: 90.4541% ( 65) 00:08:43.740 13611.323 - 13712.148: 90.8854% ( 53) 00:08:43.740 13712.148 - 13812.972: 91.4062% ( 64) 00:08:43.741 13812.972 - 
13913.797: 92.0492% ( 79)
00:08:43.741 13913.797 - 14014.622: 92.5212% ( 58)
00:08:43.741 14014.622 - 14115.446: 92.8955% ( 46)
00:08:43.741 14115.446 - 14216.271: 93.2536% ( 44)
00:08:43.741 14216.271 - 14317.095: 93.6117% ( 44)
00:08:43.741 14317.095 - 14417.920: 93.9697% ( 44)
00:08:43.741 14417.920 - 14518.745: 94.3359% ( 45)
00:08:43.741 14518.745 - 14619.569: 94.7347% ( 49)
00:08:43.741 14619.569 - 14720.394: 95.0846% ( 43)
00:08:43.741 14720.394 - 14821.218: 95.4102% ( 40)
00:08:43.741 14821.218 - 14922.043: 95.7275% ( 39)
00:08:43.741 14922.043 - 15022.868: 95.9961% ( 33)
00:08:43.741 15022.868 - 15123.692: 96.2972% ( 37)
00:08:43.741 15123.692 - 15224.517: 96.5820% ( 35)
00:08:43.741 15224.517 - 15325.342: 96.8913% ( 38)
00:08:43.741 15325.342 - 15426.166: 97.1598% ( 33)
00:08:43.741 15426.166 - 15526.991: 97.4365% ( 34)
00:08:43.741 15526.991 - 15627.815: 97.6562% ( 27)
00:08:43.741 15627.815 - 15728.640: 97.7865% ( 16)
00:08:43.741 15728.640 - 15829.465: 97.9411% ( 19)
00:08:43.741 15829.465 - 15930.289: 98.0632% ( 15)
00:08:43.741 15930.289 - 16031.114: 98.2015% ( 17)
00:08:43.741 16031.114 - 16131.938: 98.3073% ( 13)
00:08:43.741 16131.938 - 16232.763: 98.3968% ( 11)
00:08:43.741 16232.763 - 16333.588: 98.4375% ( 5)
00:08:43.741 16333.588 - 16434.412: 98.4782% ( 5)
00:08:43.741 16434.412 - 16535.237: 98.5107% ( 4)
00:08:43.741 16535.237 - 16636.062: 98.5514% ( 5)
00:08:43.741 16636.062 - 16736.886: 98.5921% ( 5)
00:08:43.741 16736.886 - 16837.711: 98.6328% ( 5)
00:08:43.741 16837.711 - 16938.535: 98.6735% ( 5)
00:08:43.741 16938.535 - 17039.360: 98.7142% ( 5)
00:08:43.741 17039.360 - 17140.185: 98.7467% ( 4)
00:08:43.741 17140.185 - 17241.009: 98.7874% ( 5)
00:08:43.741 17241.009 - 17341.834: 98.8281% ( 5)
00:08:43.741 17341.834 - 17442.658: 98.8688% ( 5)
00:08:43.741 17442.658 - 17543.483: 98.9095% ( 5)
00:08:43.741 17543.483 - 17644.308: 98.9502% ( 5)
00:08:43.741 17644.308 - 17745.132: 98.9583% ( 1)
00:08:43.741 25811.102 - 26012.751: 99.0397% ( 10)
00:08:43.741 26012.751 - 26214.400: 99.1862% ( 18)
00:08:43.741 26214.400 - 26416.049: 99.2920% ( 13)
00:08:43.741 26416.049 - 26617.698: 99.4141% ( 15)
00:08:43.741 26617.698 - 26819.348: 99.5361% ( 15)
00:08:43.741 26819.348 - 27020.997: 99.6582% ( 15)
00:08:43.741 27020.997 - 27222.646: 99.7884% ( 16)
00:08:43.741 27222.646 - 27424.295: 99.9105% ( 15)
00:08:43.741 27424.295 - 27625.945: 100.0000% ( 11)
00:08:43.741
00:08:43.741 04:04:45 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:45.125 Initializing NVMe Controllers
00:08:45.125 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:08:45.125 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:08:45.125 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:08:45.125 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:08:45.125 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:08:45.125 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:08:45.125 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:08:45.125 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:08:45.125 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:08:45.125 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:08:45.125 Initialization complete. Launching workers.
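For reference, the spdk_nvme_perf flags used above, per the tool's usage text of this SPDK era: -q 128 is the per-namespace queue depth, -w write selects a pure-write workload, -o 12288 is the I/O size in bytes (12 KiB), -t 1 runs the workload for one second, -i 0 is the shared-memory group ID, and giving -L twice (-LL) requests the detailed per-bucket latency histograms in addition to the percentile summaries. The Device Information table below can be sanity-checked from those flags alone; a minimal sketch in Python (the helper names are ours, and the figures are taken from the 0000:00:06.0 row of the table that follows):

    # Cross-check perf's results table. At a fixed I/O size, throughput is
    # IOPS * io_size; with the queue kept full, Little's law ties queue
    # depth to mean latency: in-flight I/Os = IOPS * average latency.
    IO_SIZE_BYTES = 12288   # from -o 12288
    QUEUE_DEPTH = 128       # from -q 128

    def mib_per_sec(iops: float) -> float:
        # The MiB/s column: IOPS * 12288 bytes, scaled to MiB.
        return iops * IO_SIZE_BYTES / 2**20

    def little_avg_us(iops: float, qd: int = QUEUE_DEPTH) -> float:
        # Expected average latency in microseconds for a saturated queue.
        return qd / iops * 1e6

    print(round(mib_per_sec(11760.53), 2))  # 137.82, matches the MiB/s column
    print(round(little_avg_us(11760.53)))   # 10884 vs the measured 10879.49us

The Little's-law figure lands within half a percent of the measured Average column, as expected for a queue that stays full for the whole run; the measured value is the authoritative one.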
00:08:45.125 ========================================================
00:08:45.125 Latency(us)
00:08:45.125 Device Information : IOPS MiB/s Average min max
00:08:45.125 PCIE (0000:00:06.0) NSID 1 from core 0: 11760.53 137.82 10879.49 4825.02 27078.58
00:08:45.125 PCIE (0000:00:07.0) NSID 1 from core 0: 11760.53 137.82 10889.15 5555.32 28083.66
00:08:45.125 PCIE (0000:00:09.0) NSID 1 from core 0: 11760.53 137.82 10883.15 5170.66 31097.32
00:08:45.125 PCIE (0000:00:08.0) NSID 1 from core 0: 11760.53 137.82 10876.87 5054.69 31978.80
00:08:45.125 PCIE (0000:00:08.0) NSID 2 from core 0: 11760.53 137.82 10870.66 5177.44 32912.14
00:08:45.125 PCIE (0000:00:08.0) NSID 3 from core 0: 11888.37 139.32 10747.64 5333.56 20297.54
00:08:45.125 ========================================================
00:08:45.125 Total : 70691.04 828.41 10857.63 4825.02 32912.14
00:08:45.125
00:08:45.125 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:08:45.125 =================================================================================
00:08:45.125 1.00000% : 5671.385us
00:08:45.125 10.00000% : 6956.898us
00:08:45.125 25.00000% : 8771.742us
00:08:45.125 50.00000% : 10889.058us
00:08:45.125 75.00000% : 12653.489us
00:08:45.125 90.00000% : 14216.271us
00:08:45.125 95.00000% : 15426.166us
00:08:45.125 98.00000% : 17745.132us
00:08:45.125 99.00000% : 25004.505us
00:08:45.125 99.50000% : 26012.751us
00:08:45.125 99.90000% : 27020.997us
00:08:45.125 99.99000% : 27020.997us
00:08:45.125 99.99900% : 27222.646us
00:08:45.125 99.99990% : 27222.646us
00:08:45.125 99.99999% : 27222.646us
00:08:45.125
00:08:45.125 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:08:45.125 =================================================================================
00:08:45.125 1.00000% : 5847.828us
00:08:45.125 10.00000% : 6956.898us
00:08:45.125 25.00000% : 8721.329us
00:08:45.125 50.00000% : 11040.295us
00:08:45.125 75.00000% : 12703.902us
00:08:45.125 90.00000% : 14115.446us
00:08:45.125 95.00000% : 15123.692us
00:08:45.125 98.00000% : 17946.782us
00:08:45.125 99.00000% : 26214.400us
00:08:45.125 99.50000% : 27222.646us
00:08:45.125 99.90000% : 28029.243us
00:08:45.125 99.99000% : 28230.892us
00:08:45.125 99.99900% : 28230.892us
00:08:45.125 99.99990% : 28230.892us
00:08:45.125 99.99999% : 28230.892us
00:08:45.125
00:08:45.125 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:08:45.125 =================================================================================
00:08:45.125 1.00000% : 5797.415us
00:08:45.125 10.00000% : 7007.311us
00:08:45.125 25.00000% : 8721.329us
00:08:45.125 50.00000% : 10788.234us
00:08:45.125 75.00000% : 12603.077us
00:08:45.125 90.00000% : 13913.797us
00:08:45.125 95.00000% : 15123.692us
00:08:45.125 98.00000% : 17644.308us
00:08:45.125 99.00000% : 29239.138us
00:08:45.125 99.50000% : 30247.385us
00:08:45.125 99.90000% : 31053.982us
00:08:45.125 99.99000% : 31255.631us
00:08:45.125 99.99900% : 31255.631us
00:08:45.125 99.99990% : 31255.631us
00:08:45.125 99.99999% : 31255.631us
00:08:45.125
00:08:45.125 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:08:45.125 =================================================================================
00:08:45.125 1.00000% : 5772.209us
00:08:45.125 10.00000% : 6956.898us
00:08:45.125 25.00000% : 8721.329us
00:08:45.125 50.00000% : 10536.172us
00:08:45.125 75.00000% : 12653.489us
00:08:45.125 90.00000% : 14014.622us
00:08:45.125 95.00000% : 15325.342us
00:08:45.125 98.00000% : 18249.255
00:08:45.125 99.00000% : 30247.385us 00:08:45.125 99.50000% : 31053.982us 00:08:45.125 99.90000% : 31860.578us 00:08:45.125 99.99000% : 32062.228us 00:08:45.125 99.99900% : 32062.228us 00:08:45.125 99.99990% : 32062.228us 00:08:45.125 99.99999% : 32062.228us 00:08:45.125 00:08:45.125 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:08:45.125 ================================================================================= 00:08:45.125 1.00000% : 5847.828us 00:08:45.125 10.00000% : 6856.074us 00:08:45.125 25.00000% : 8872.566us 00:08:45.125 50.00000% : 10586.585us 00:08:45.125 75.00000% : 12653.489us 00:08:45.125 90.00000% : 13812.972us 00:08:45.125 95.00000% : 15325.342us 00:08:45.125 98.00000% : 18450.905us 00:08:45.125 99.00000% : 31053.982us 00:08:45.125 99.50000% : 32062.228us 00:08:45.125 99.90000% : 32868.825us 00:08:45.125 99.99000% : 33070.474us 00:08:45.125 99.99900% : 33070.474us 00:08:45.125 99.99990% : 33070.474us 00:08:45.125 99.99999% : 33070.474us 00:08:45.125 00:08:45.125 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:08:45.125 ================================================================================= 00:08:45.125 1.00000% : 5822.622us 00:08:45.125 10.00000% : 6856.074us 00:08:45.125 25.00000% : 8771.742us 00:08:45.125 50.00000% : 10788.234us 00:08:45.125 75.00000% : 12603.077us 00:08:45.125 90.00000% : 13812.972us 00:08:45.125 95.00000% : 15426.166us 00:08:45.125 98.00000% : 17946.782us 00:08:45.125 99.00000% : 18955.028us 00:08:45.125 99.50000% : 19459.151us 00:08:45.125 99.90000% : 20064.098us 00:08:45.125 99.99000% : 20265.748us 00:08:45.125 99.99900% : 20366.572us 00:08:45.125 99.99990% : 20366.572us 00:08:45.125 99.99999% : 20366.572us 00:08:45.125 00:08:45.125 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:08:45.125 ============================================================================== 00:08:45.125 Range in us Cumulative IO count 00:08:45.125 4814.375 - 4839.582: 0.0085% ( 1) 00:08:45.125 4839.582 - 4864.788: 0.0170% ( 1) 00:08:45.125 4864.788 - 4889.994: 0.0340% ( 2) 00:08:45.125 4889.994 - 4915.200: 0.0510% ( 2) 00:08:45.125 4940.406 - 4965.612: 0.0594% ( 1) 00:08:45.125 4965.612 - 4990.818: 0.0679% ( 1) 00:08:45.125 4990.818 - 5016.025: 0.0764% ( 1) 00:08:45.125 5016.025 - 5041.231: 0.0849% ( 1) 00:08:45.125 5041.231 - 5066.437: 0.1019% ( 2) 00:08:45.125 5091.643 - 5116.849: 0.1189% ( 2) 00:08:45.125 5116.849 - 5142.055: 0.1274% ( 1) 00:08:45.125 5142.055 - 5167.262: 0.1359% ( 1) 00:08:45.125 5167.262 - 5192.468: 0.1529% ( 2) 00:08:45.125 5192.468 - 5217.674: 0.1698% ( 2) 00:08:45.125 5217.674 - 5242.880: 0.1953% ( 3) 00:08:45.125 5242.880 - 5268.086: 0.2038% ( 1) 00:08:45.125 5268.086 - 5293.292: 0.2463% ( 5) 00:08:45.125 5293.292 - 5318.498: 0.3057% ( 7) 00:08:45.125 5318.498 - 5343.705: 0.3312% ( 3) 00:08:45.125 5343.705 - 5368.911: 0.3821% ( 6) 00:08:45.125 5368.911 - 5394.117: 0.4331% ( 6) 00:08:45.125 5394.117 - 5419.323: 0.4416% ( 1) 00:08:45.125 5419.323 - 5444.529: 0.4755% ( 4) 00:08:45.125 5444.529 - 5469.735: 0.5180% ( 5) 00:08:45.125 5469.735 - 5494.942: 0.5605% ( 5) 00:08:45.125 5494.942 - 5520.148: 0.5774% ( 2) 00:08:45.125 5520.148 - 5545.354: 0.6539% ( 9) 00:08:45.125 5545.354 - 5570.560: 0.7218% ( 8) 00:08:45.125 5570.560 - 5595.766: 0.8237% ( 12) 00:08:45.125 5595.766 - 5620.972: 0.9171% ( 11) 00:08:45.125 5620.972 - 5646.178: 0.9935% ( 9) 00:08:45.125 5646.178 - 5671.385: 1.0615% ( 8) 00:08:45.125 5671.385 - 5696.591: 1.1209% ( 7) 00:08:45.125 5696.591 - 5721.797: 1.1719% ( 
00:08:45.125 [latency histogram bucket data omitted]
00:08:45.127 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:08:45.127 ==============================================================================
00:08:45.127 Range in us Cumulative IO count
00:08:45.127 [latency histogram bucket data omitted]
00:08:45.128 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:08:45.128 ==============================================================================
00:08:45.128 Range in us Cumulative IO count
00:08:45.128 [latency histogram bucket data omitted]
00:08:45.129 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:08:45.129 ==============================================================================
00:08:45.129 Range in us Cumulative IO count
00:08:45.129 [latency histogram bucket data omitted]
00:08:45.131 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:08:45.131 ==============================================================================
00:08:45.131 Range in us Cumulative IO count
00:08:45.131 [latency histogram bucket data omitted]
00:08:45.132 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:08:45.132 ==============================================================================
00:08:45.132 Range in us Cumulative IO count
14115.446 - 14216.271: 92.3975% ( 40) 00:08:45.133 14216.271 - 14317.095: 92.7335% ( 40) 00:08:45.133 14317.095 - 14417.920: 93.0444% ( 37) 00:08:45.133 14417.920 - 14518.745: 93.3048% ( 31) 00:08:45.133 14518.745 - 14619.569: 93.5316% ( 27) 00:08:45.133 14619.569 - 14720.394: 93.6576% ( 15) 00:08:45.133 14720.394 - 14821.218: 93.8004% ( 17) 00:08:45.133 14821.218 - 14922.043: 94.0188% ( 26) 00:08:45.133 14922.043 - 15022.868: 94.2624% ( 29) 00:08:45.133 15022.868 - 15123.692: 94.5144% ( 30) 00:08:45.133 15123.692 - 15224.517: 94.7665% ( 30) 00:08:45.133 15224.517 - 15325.342: 94.9765% ( 25) 00:08:45.133 15325.342 - 15426.166: 95.1949% ( 26) 00:08:45.133 15426.166 - 15526.991: 95.4721% ( 33) 00:08:45.133 15526.991 - 15627.815: 95.6653% ( 23) 00:08:45.133 15627.815 - 15728.640: 95.8417% ( 21) 00:08:45.133 15728.640 - 15829.465: 96.0265% ( 22) 00:08:45.133 15829.465 - 15930.289: 96.2198% ( 23) 00:08:45.133 15930.289 - 16031.114: 96.4214% ( 24) 00:08:45.133 16031.114 - 16131.938: 96.6146% ( 23) 00:08:45.133 16131.938 - 16232.763: 96.8330% ( 26) 00:08:45.133 16232.763 - 16333.588: 96.9926% ( 19) 00:08:45.133 16333.588 - 16434.412: 97.1774% ( 22) 00:08:45.133 16434.412 - 16535.237: 97.3202% ( 17) 00:08:45.133 16535.237 - 16636.062: 97.4630% ( 17) 00:08:45.133 16636.062 - 16736.886: 97.5722% ( 13) 00:08:45.133 16736.886 - 16837.711: 97.6478% ( 9) 00:08:45.133 16837.711 - 16938.535: 97.7235% ( 9) 00:08:45.133 16938.535 - 17039.360: 97.7487% ( 3) 00:08:45.133 17039.360 - 17140.185: 97.7907% ( 5) 00:08:45.133 17140.185 - 17241.009: 97.8243% ( 4) 00:08:45.133 17241.009 - 17341.834: 97.8495% ( 3) 00:08:45.133 17644.308 - 17745.132: 97.8999% ( 6) 00:08:45.133 17745.132 - 17845.957: 97.9503% ( 6) 00:08:45.133 17845.957 - 17946.782: 98.0007% ( 6) 00:08:45.133 17946.782 - 18047.606: 98.0595% ( 7) 00:08:45.133 18047.606 - 18148.431: 98.1603% ( 12) 00:08:45.133 18148.431 - 18249.255: 98.3283% ( 20) 00:08:45.133 18249.255 - 18350.080: 98.4711% ( 17) 00:08:45.133 18350.080 - 18450.905: 98.5719% ( 12) 00:08:45.133 18450.905 - 18551.729: 98.6559% ( 10) 00:08:45.133 18551.729 - 18652.554: 98.7483% ( 11) 00:08:45.133 18652.554 - 18753.378: 98.8323% ( 10) 00:08:45.133 18753.378 - 18854.203: 98.9415% ( 13) 00:08:45.133 18854.203 - 18955.028: 99.0423% ( 12) 00:08:45.133 18955.028 - 19055.852: 99.1431% ( 12) 00:08:45.133 19055.852 - 19156.677: 99.2440% ( 12) 00:08:45.133 19156.677 - 19257.502: 99.3364% ( 11) 00:08:45.133 19257.502 - 19358.326: 99.4288% ( 11) 00:08:45.133 19358.326 - 19459.151: 99.5296% ( 12) 00:08:45.133 19459.151 - 19559.975: 99.6220% ( 11) 00:08:45.133 19559.975 - 19660.800: 99.7228% ( 12) 00:08:45.133 19660.800 - 19761.625: 99.7984% ( 9) 00:08:45.133 19761.625 - 19862.449: 99.8404% ( 5) 00:08:45.133 19862.449 - 19963.274: 99.8740% ( 4) 00:08:45.133 19963.274 - 20064.098: 99.9160% ( 5) 00:08:45.133 20064.098 - 20164.923: 99.9580% ( 5) 00:08:45.133 20164.923 - 20265.748: 99.9916% ( 4) 00:08:45.133 20265.748 - 20366.572: 100.0000% ( 1) 00:08:45.133 00:08:45.133 04:04:46 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:45.133 00:08:45.133 real 0m2.475s 00:08:45.133 user 0m2.180s 00:08:45.133 sys 0m0.186s 00:08:45.133 04:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.133 04:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:45.133 ************************************ 00:08:45.133 END TEST nvme_perf 00:08:45.133 ************************************ 00:08:45.133 04:04:46 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world 
-i 0 00:08:45.133 04:04:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:45.133 04:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.133 04:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:45.133 ************************************ 00:08:45.133 START TEST nvme_hello_world 00:08:45.134 ************************************ 00:08:45.134 04:04:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:45.134 Initializing NVMe Controllers 00:08:45.134 Attached to 0000:00:06.0 00:08:45.134 Namespace ID: 1 size: 6GB 00:08:45.134 Attached to 0000:00:07.0 00:08:45.134 Namespace ID: 1 size: 5GB 00:08:45.134 Attached to 0000:00:09.0 00:08:45.134 Namespace ID: 1 size: 1GB 00:08:45.134 Attached to 0000:00:08.0 00:08:45.134 Namespace ID: 1 size: 4GB 00:08:45.134 Namespace ID: 2 size: 4GB 00:08:45.134 Namespace ID: 3 size: 4GB 00:08:45.134 Initialization complete. 00:08:45.134 INFO: using host memory buffer for IO 00:08:45.134 Hello world! 00:08:45.134 INFO: using host memory buffer for IO 00:08:45.134 Hello world! 00:08:45.134 INFO: using host memory buffer for IO 00:08:45.134 Hello world! 00:08:45.134 INFO: using host memory buffer for IO 00:08:45.134 Hello world! 00:08:45.134 INFO: using host memory buffer for IO 00:08:45.134 Hello world! 00:08:45.134 INFO: using host memory buffer for IO 00:08:45.134 Hello world! 00:08:45.134 ************************************ 00:08:45.134 END TEST nvme_hello_world 00:08:45.134 ************************************ 00:08:45.134 00:08:45.134 real 0m0.204s 00:08:45.134 user 0m0.070s 00:08:45.134 sys 0m0.090s 00:08:45.134 04:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.134 04:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:45.134 04:04:46 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:45.134 04:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.134 04:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.134 04:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:45.134 ************************************ 00:08:45.134 START TEST nvme_sgl 00:08:45.134 ************************************ 00:08:45.134 04:04:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:45.395 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:08:45.395 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:08:45.395 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:08:45.395 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:08:45.395 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:08:45.395 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:08:45.395 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:08:45.395 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:08:45.395 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:08:45.395 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:08:45.395 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:08:45.395 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:08:45.395 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_3 Invalid IO length parameter 
00:08:45.396 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:08:45.396 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:08:45.396 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:08:45.396 NVMe Readv/Writev Request test 00:08:45.396 Attached to 0000:00:06.0 00:08:45.396 Attached to 0000:00:07.0 00:08:45.396 Attached to 0000:00:09.0 00:08:45.396 Attached to 0000:00:08.0 00:08:45.396 0000:00:06.0: build_io_request_2 test passed 00:08:45.396 0000:00:06.0: build_io_request_4 test passed 00:08:45.396 0000:00:06.0: build_io_request_5 test passed 00:08:45.396 0000:00:06.0: build_io_request_6 test passed 00:08:45.396 0000:00:06.0: build_io_request_7 test passed 00:08:45.396 0000:00:06.0: build_io_request_10 test passed 00:08:45.396 0000:00:07.0: build_io_request_2 test passed 00:08:45.396 0000:00:07.0: build_io_request_4 test passed 00:08:45.396 0000:00:07.0: build_io_request_5 test passed 00:08:45.396 0000:00:07.0: build_io_request_6 test passed 00:08:45.396 0000:00:07.0: build_io_request_7 test passed 00:08:45.396 0000:00:07.0: build_io_request_10 test passed 00:08:45.396 Cleaning up... 
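Note on the nvme_sgl output above: the 'Invalid IO length parameter' lines are the negative half of the test, deliberately malformed scatter-gather requests that the driver is expected to reject, while the 'test passed' lines are the positive half; the clean 'Cleaning up...' exit is the pass signal. A sketch of a focused re-run against one controller, assuming the sgl binary accepts the same -r transport-ID filter that doorbell_aers demonstrably takes later in this log (the flag is unverified for sgl itself):

  # hypothetical single-controller invocation; the -r filter is an assumption
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl -r 'trtype:PCIe traddr:0000:00:06.0'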
00:08:45.396 ************************************ 00:08:45.396 END TEST nvme_sgl 00:08:45.396 ************************************ 00:08:45.396 00:08:45.396 real 0m0.274s 00:08:45.396 user 0m0.146s 00:08:45.396 sys 0m0.079s 00:08:45.396 04:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.396 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:45.396 04:04:47 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:45.396 04:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.396 04:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.396 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:45.657 ************************************ 00:08:45.657 START TEST nvme_e2edp 00:08:45.657 ************************************ 00:08:45.657 04:04:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:45.657 NVMe Write/Read with End-to-End data protection test 00:08:45.657 Attached to 0000:00:06.0 00:08:45.657 Attached to 0000:00:07.0 00:08:45.657 Attached to 0000:00:09.0 00:08:45.657 Attached to 0000:00:08.0 00:08:45.657 Cleaning up... 00:08:45.657 00:08:45.657 real 0m0.181s 00:08:45.657 user 0m0.045s 00:08:45.657 sys 0m0.091s 00:08:45.657 04:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.657 ************************************ 00:08:45.657 END TEST nvme_e2edp 00:08:45.657 ************************************ 00:08:45.657 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:45.657 04:04:47 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:45.657 04:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.657 04:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.658 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:45.658 ************************************ 00:08:45.658 START TEST nvme_reserve 00:08:45.658 ************************************ 00:08:45.658 04:04:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:45.919 ===================================================== 00:08:45.919 NVMe Controller at PCI bus 0, device 6, function 0 00:08:45.919 ===================================================== 00:08:45.919 Reservations: Not Supported 00:08:45.919 ===================================================== 00:08:45.919 NVMe Controller at PCI bus 0, device 7, function 0 00:08:45.919 ===================================================== 00:08:45.919 Reservations: Not Supported 00:08:45.919 ===================================================== 00:08:45.919 NVMe Controller at PCI bus 0, device 9, function 0 00:08:45.919 ===================================================== 00:08:45.919 Reservations: Not Supported 00:08:45.919 ===================================================== 00:08:45.919 NVMe Controller at PCI bus 0, device 8, function 0 00:08:45.919 ===================================================== 00:08:45.919 Reservations: Not Supported 00:08:45.919 Reservation test passed 00:08:45.919 ************************************ 00:08:45.919 END TEST nvme_reserve 00:08:45.919 ************************************ 00:08:45.919 00:08:45.919 real 0m0.195s 00:08:45.919 user 0m0.052s 00:08:45.919 sys 0m0.085s 00:08:45.919 04:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.919 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:45.919 04:04:47 -- 
nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:45.919 04:04:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.919 04:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.919 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:45.919 ************************************ 00:08:45.919 START TEST nvme_err_injection 00:08:45.919 ************************************ 00:08:45.919 04:04:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:46.180 NVMe Error Injection test 00:08:46.180 Attached to 0000:00:06.0 00:08:46.180 Attached to 0000:00:07.0 00:08:46.180 Attached to 0000:00:09.0 00:08:46.180 Attached to 0000:00:08.0 00:08:46.180 0000:00:09.0: get features failed as expected 00:08:46.180 0000:00:08.0: get features failed as expected 00:08:46.180 0000:00:06.0: get features failed as expected 00:08:46.180 0000:00:07.0: get features failed as expected 00:08:46.180 0000:00:06.0: get features successfully as expected 00:08:46.180 0000:00:07.0: get features successfully as expected 00:08:46.180 0000:00:09.0: get features successfully as expected 00:08:46.180 0000:00:08.0: get features successfully as expected 00:08:46.180 0000:00:06.0: read failed as expected 00:08:46.180 0000:00:08.0: read failed as expected 00:08:46.180 0000:00:07.0: read failed as expected 00:08:46.180 0000:00:09.0: read failed as expected 00:08:46.180 0000:00:08.0: read successfully as expected 00:08:46.180 0000:00:06.0: read successfully as expected 00:08:46.180 0000:00:07.0: read successfully as expected 00:08:46.180 0000:00:09.0: read successfully as expected 00:08:46.180 Cleaning up... 00:08:46.180 ************************************ 00:08:46.180 END TEST nvme_err_injection 00:08:46.180 ************************************ 00:08:46.180 00:08:46.180 real 0m0.202s 00:08:46.180 user 0m0.068s 00:08:46.180 sys 0m0.087s 00:08:46.180 04:04:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.180 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:46.180 04:04:47 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:46.180 04:04:47 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:46.180 04:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.180 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:46.180 ************************************ 00:08:46.180 START TEST nvme_overhead 00:08:46.180 ************************************ 00:08:46.180 04:04:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:47.570 Initializing NVMe Controllers 00:08:47.570 Attached to 0000:00:06.0 00:08:47.570 Attached to 0000:00:07.0 00:08:47.570 Attached to 0000:00:09.0 00:08:47.570 Attached to 0000:00:08.0 00:08:47.570 Initialization complete. Launching workers. 
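The overhead run launched above was invoked as 'overhead -o 4096 -t 1 -H -i 0'. One plausible reading of the flags, following common SPDK perf-tool conventions rather than this binary's documented help, is sketched below:

  # assumed flag meanings, not confirmed in this log:
  #   -o 4096  4 KiB I/O size        -t 1  run for 1 second
  #   -H       print the submit/complete latency histograms shown below
  #   -i 0     shared-memory group ID, matching the other tests in this run
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0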
00:08:47.570 submit (in ns) avg, min, max = 12908.3, 9780.0, 103163.8
00:08:47.570 complete (in ns) avg, min, max = 8903.2, 7254.6, 407484.6
00:08:47.570
00:08:47.570 Submit histogram
00:08:47.570 ================
00:08:47.570 Range in us Cumulative Count
00:08:47.570 [bucket detail condensed: submissions span 9.748 - 103.188 us; 50% cumulative by ~11.4 us, 90% by ~16.7 us; a single outlier in the 102.400 - 103.188 us bucket brings the count to 100.0000%]
00:08:47.571
00:08:47.571 Complete histogram
00:08:47.571 ==================
00:08:47.571 Range in us Cumulative Count
00:08:47.572 [bucket detail condensed: completions span 7.237 - 409.600 us; 50% cumulative by ~7.5 us, 90% by ~11.8 us; a single outlier in the 406.449 - 409.600 us bucket brings the count to 100.0000%]
00:08:47.572
00:08:47.572 real 0m1.179s
00:08:47.572 user 0m1.056s
00:08:47.572 sys 0m0.083s
00:08:47.572 04:04:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:47.572 04:04:49 -- common/autotest_common.sh@10 -- # set +x
00:08:47.572 ************************************
00:08:47.572 END TEST nvme_overhead
00:08:47.572 ************************************
00:08:47.572 04:04:49 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:47.572 04:04:49 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:08:47.572 04:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:47.572 04:04:49 -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.572 ************************************ 00:08:47.572 START TEST nvme_arbitration 00:08:47.572 ************************************ 00:08:47.572 04:04:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:50.872 Initializing NVMe Controllers 00:08:50.872 Attached to 0000:00:06.0 00:08:50.872 Attached to 0000:00:07.0 00:08:50.872 Attached to 0000:00:09.0 00:08:50.872 Attached to 0000:00:08.0 00:08:50.872 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:50.872 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:50.872 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:50.872 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:50.872 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:50.872 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:50.872 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:50.872 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:50.872 Initialization complete. Launching workers. 00:08:50.872 Starting thread on core 1 with urgent priority queue 00:08:50.872 Starting thread on core 2 with urgent priority queue 00:08:50.872 Starting thread on core 3 with urgent priority queue 00:08:50.872 Starting thread on core 0 with urgent priority queue 00:08:50.872 QEMU NVMe Ctrl (12340 ) core 0: 5482.67 IO/s 18.24 secs/100000 ios 00:08:50.872 QEMU NVMe Ctrl (12342 ) core 0: 5482.67 IO/s 18.24 secs/100000 ios 00:08:50.872 QEMU NVMe Ctrl (12341 ) core 1: 5568.00 IO/s 17.96 secs/100000 ios 00:08:50.872 QEMU NVMe Ctrl (12342 ) core 1: 5568.00 IO/s 17.96 secs/100000 ios 00:08:50.872 QEMU NVMe Ctrl (12343 ) core 2: 5269.33 IO/s 18.98 secs/100000 ios 00:08:50.872 QEMU NVMe Ctrl (12342 ) core 3: 5184.00 IO/s 19.29 secs/100000 ios 00:08:50.872 ======================================================== 00:08:50.872 00:08:50.872 ************************************ 00:08:50.872 END TEST nvme_arbitration 00:08:50.872 ************************************ 00:08:50.872 00:08:50.872 real 0m3.225s 00:08:50.872 user 0m9.017s 00:08:50.872 sys 0m0.119s 00:08:50.872 04:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.872 04:04:52 -- common/autotest_common.sh@10 -- # set +x 00:08:50.872 04:04:52 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:08:50.872 04:04:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:50.872 04:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.872 04:04:52 -- common/autotest_common.sh@10 -- # set +x 00:08:50.872 ************************************ 00:08:50.872 START TEST nvme_single_aen 00:08:50.872 ************************************ 00:08:50.872 04:04:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:08:50.872 [2024-11-26 04:04:52.502298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:50.872 [2024-11-26 04:04:52.502488] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.872 [2024-11-26 04:04:52.627028] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:50.872 [2024-11-26 04:04:52.628971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:50.872 [2024-11-26 04:04:52.630035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:50.872 [2024-11-26 04:04:52.630986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:51.131 Asynchronous Event Request test 00:08:51.131 Attached to 0000:00:06.0 00:08:51.131 Attached to 0000:00:07.0 00:08:51.131 Attached to 0000:00:09.0 00:08:51.131 Attached to 0000:00:08.0 00:08:51.131 Reset controller to setup AER completions for this process 00:08:51.131 Registering asynchronous event callbacks... 00:08:51.131 Getting orig temperature thresholds of all controllers 00:08:51.131 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.131 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.131 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.131 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.131 Setting all controllers temperature threshold low to trigger AER 00:08:51.131 Waiting for all controllers temperature threshold to be set lower 00:08:51.131 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.131 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:08:51.131 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.131 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:08:51.131 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.131 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:08:51.131 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.131 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:08:51.131 Waiting for all controllers to trigger AER and reset threshold 00:08:51.131 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.131 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.131 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.131 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.131 Cleaning up... 
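The single-AEN pass above works by dropping each controller's temperature threshold below its current 323 Kelvin reading so the drive raises a temperature Asynchronous Event, then restoring the original 343 Kelvin threshold from the aer_cb callback. The same trigger can be reproduced on a live controller with nvme-cli, where feature 0x04 is the NVMe Temperature Threshold feature and the value is in Kelvin (default TMPSEL/THSEL fields assumed; verify against your drive before experimenting):

  nvme get-feature /dev/nvme0 -f 0x04               # read the current threshold
  sudo nvme set-feature /dev/nvme0 -f 0x04 -v 300   # 300 K, below the 323 K reading, so an AER should fire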
00:08:51.131 00:08:51.131 real 0m0.188s 00:08:51.131 user 0m0.060s 00:08:51.131 sys 0m0.082s 00:08:51.131 04:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.131 04:04:52 -- common/autotest_common.sh@10 -- # set +x 00:08:51.131 ************************************ 00:08:51.131 END TEST nvme_single_aen 00:08:51.131 ************************************ 00:08:51.131 04:04:52 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:51.131 04:04:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.131 04:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.131 04:04:52 -- common/autotest_common.sh@10 -- # set +x 00:08:51.131 ************************************ 00:08:51.131 START TEST nvme_doorbell_aers 00:08:51.131 ************************************ 00:08:51.131 04:04:52 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:08:51.131 04:04:52 -- nvme/nvme.sh@70 -- # bdfs=() 00:08:51.131 04:04:52 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:51.131 04:04:52 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:51.132 04:04:52 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:51.132 04:04:52 -- common/autotest_common.sh@1508 -- # bdfs=() 00:08:51.132 04:04:52 -- common/autotest_common.sh@1508 -- # local bdfs 00:08:51.132 04:04:52 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:51.132 04:04:52 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:08:51.132 04:04:52 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:51.132 04:04:52 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:08:51.132 04:04:52 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:08:51.132 04:04:52 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:51.132 04:04:52 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:08:51.391 [2024-11-26 04:04:52.976869] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:01.416 Executing: test_write_invalid_db 00:09:01.416 Waiting for AER completion... 00:09:01.416 Failure: test_write_invalid_db 00:09:01.416 00:09:01.416 Executing: test_invalid_db_write_overflow_sq 00:09:01.416 Waiting for AER completion... 00:09:01.416 Failure: test_invalid_db_write_overflow_sq 00:09:01.416 00:09:01.416 Executing: test_invalid_db_write_overflow_cq 00:09:01.416 Waiting for AER completion... 00:09:01.416 Failure: test_invalid_db_write_overflow_cq 00:09:01.416 00:09:01.416 04:05:02 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:01.416 04:05:02 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:01.416 [2024-11-26 04:05:03.021394] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:11.402 Executing: test_write_invalid_db 00:09:11.402 Waiting for AER completion... 00:09:11.402 Failure: test_write_invalid_db 00:09:11.402 00:09:11.402 Executing: test_invalid_db_write_overflow_sq 00:09:11.402 Waiting for AER completion... 
00:09:11.402 Failure: test_invalid_db_write_overflow_sq 00:09:11.402 00:09:11.402 Executing: test_invalid_db_write_overflow_cq 00:09:11.402 Waiting for AER completion... 00:09:11.402 Failure: test_invalid_db_write_overflow_cq 00:09:11.402 00:09:11.402 04:05:12 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:11.402 04:05:12 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:11.403 [2024-11-26 04:05:13.030102] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:21.366 Executing: test_write_invalid_db 00:09:21.366 Waiting for AER completion... 00:09:21.366 Failure: test_write_invalid_db 00:09:21.366 00:09:21.366 Executing: test_invalid_db_write_overflow_sq 00:09:21.366 Waiting for AER completion... 00:09:21.366 Failure: test_invalid_db_write_overflow_sq 00:09:21.366 00:09:21.366 Executing: test_invalid_db_write_overflow_cq 00:09:21.366 Waiting for AER completion... 00:09:21.366 Failure: test_invalid_db_write_overflow_cq 00:09:21.366 00:09:21.366 04:05:22 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:21.366 04:05:22 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:09:21.367 [2024-11-26 04:05:23.076664] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.336 Executing: test_write_invalid_db 00:09:31.336 Waiting for AER completion... 00:09:31.336 Failure: test_write_invalid_db 00:09:31.336 00:09:31.336 Executing: test_invalid_db_write_overflow_sq 00:09:31.336 Waiting for AER completion... 00:09:31.336 Failure: test_invalid_db_write_overflow_sq 00:09:31.336 00:09:31.336 Executing: test_invalid_db_write_overflow_cq 00:09:31.336 Waiting for AER completion... 00:09:31.336 Failure: test_invalid_db_write_overflow_cq 00:09:31.336 00:09:31.336 00:09:31.336 real 0m40.200s 00:09:31.336 user 0m34.132s 00:09:31.336 sys 0m5.692s 00:09:31.336 04:05:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.336 04:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:31.336 ************************************ 00:09:31.336 END TEST nvme_doorbell_aers 00:09:31.336 ************************************ 00:09:31.336 04:05:32 -- nvme/nvme.sh@97 -- # uname 00:09:31.336 04:05:32 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:31.336 04:05:32 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:09:31.336 04:05:32 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:31.336 04:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.336 04:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:31.336 ************************************ 00:09:31.336 START TEST nvme_multi_aen 00:09:31.336 ************************************ 00:09:31.336 04:05:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:09:31.336 [2024-11-26 04:05:32.989459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:31.336 [2024-11-26 04:05:32.989633] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.595 [2024-11-26 04:05:33.099405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:31.595 [2024-11-26 04:05:33.099591] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.099693] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.099729] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.100848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:31.595 [2024-11-26 04:05:33.100920] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.100976] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.100985] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.101819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:31.595 [2024-11-26 04:05:33.101841] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.101859] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.101867] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.102738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:31.595 [2024-11-26 04:05:33.102794] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.102854] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.102880] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75408) is not found. Dropping the request. 00:09:31.595 [2024-11-26 04:05:33.113413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:31.595 [2024-11-26 04:05:33.113627] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 Child process pid: 75934 00:09:31.595 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.595 [Child] Asynchronous Event Request test 00:09:31.595 [Child] Attached to 0000:00:06.0 00:09:31.595 [Child] Attached to 0000:00:07.0 00:09:31.595 [Child] Attached to 0000:00:09.0 00:09:31.595 [Child] Attached to 0000:00:08.0 00:09:31.595 [Child] Registering asynchronous event callbacks... 00:09:31.595 [Child] Getting orig temperature thresholds of all controllers 00:09:31.595 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.595 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.595 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.595 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.595 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:31.595 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.595 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.595 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.595 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.595 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.595 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.595 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.595 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.595 [Child] Cleaning up... 00:09:31.595 Asynchronous Event Request test 00:09:31.595 Attached to 0000:00:06.0 00:09:31.595 Attached to 0000:00:07.0 00:09:31.595 Attached to 0000:00:09.0 00:09:31.595 Attached to 0000:00:08.0 00:09:31.595 Reset controller to setup AER completions for this process 00:09:31.595 Registering asynchronous event callbacks... 
00:09:31.595 Getting orig temperature thresholds of all controllers 00:09:31.595 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.595 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.595 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.595 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.595 Setting all controllers temperature threshold low to trigger AER 00:09:31.595 Waiting for all controllers temperature threshold to be set lower 00:09:31.595 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.595 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:31.595 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.595 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:31.595 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.595 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:31.595 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.595 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:31.595 Waiting for all controllers to trigger AER and reset threshold 00:09:31.595 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.595 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.595 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.595 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.595 Cleaning up... 00:09:31.595 00:09:31.595 real 0m0.357s 00:09:31.595 user 0m0.100s 00:09:31.595 sys 0m0.152s 00:09:31.595 04:05:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.595 04:05:33 -- common/autotest_common.sh@10 -- # set +x 00:09:31.595 ************************************ 00:09:31.595 END TEST nvme_multi_aen 00:09:31.595 ************************************ 00:09:31.595 04:05:33 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:31.595 04:05:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:31.595 04:05:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.595 04:05:33 -- common/autotest_common.sh@10 -- # set +x 00:09:31.595 ************************************ 00:09:31.595 START TEST nvme_startup 00:09:31.595 ************************************ 00:09:31.595 04:05:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:31.854 Initializing NVMe Controllers 00:09:31.854 Attached to 0000:00:06.0 00:09:31.854 Attached to 0000:00:07.0 00:09:31.854 Attached to 0000:00:09.0 00:09:31.854 Attached to 0000:00:08.0 00:09:31.854 Initialization complete. 00:09:31.854 Time used:136459.562 (us). 
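The startup test finished in 136459.562 us against the '-t 1000000' argument visible in its invocation, which reads as a 1,000,000 us ceiling on acceptable startup time; that interpretation is inferred from the numbers, not from documentation. A standalone re-run would look like:

  # -t appears to be a startup-time budget in microseconds (assumed)
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000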
00:09:31.854 ************************************ 00:09:31.854 END TEST nvme_startup 00:09:31.854 ************************************ 00:09:31.854 00:09:31.854 real 0m0.192s 00:09:31.854 user 0m0.050s 00:09:31.854 sys 0m0.092s 00:09:31.854 04:05:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.854 04:05:33 -- common/autotest_common.sh@10 -- # set +x 00:09:31.854 04:05:33 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:31.854 04:05:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:31.854 04:05:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.854 04:05:33 -- common/autotest_common.sh@10 -- # set +x 00:09:31.854 ************************************ 00:09:31.854 START TEST nvme_multi_secondary 00:09:31.854 ************************************ 00:09:31.854 04:05:33 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:09:31.854 04:05:33 -- nvme/nvme.sh@52 -- # pid0=75979 00:09:31.854 04:05:33 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:31.854 04:05:33 -- nvme/nvme.sh@54 -- # pid1=75980 00:09:31.854 04:05:33 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:31.854 04:05:33 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:35.138 Initializing NVMe Controllers 00:09:35.138 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:35.138 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:35.138 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:35.138 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:35.138 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:09:35.138 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:09:35.138 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:09:35.138 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:09:35.138 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:09:35.138 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:09:35.138 Initialization complete. Launching workers. 
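All three spdk_nvme_perf invocations above pass -i 0, the shared-memory instance id, so they attach to the same controllers as one DPDK primary plus two secondaries; the disjoint core masks (0x1, 0x2, 0x4) keep them on separate cores, and the two -t 3 runs finish while the -t 5 run on core 0 is still driving I/O. The launch pattern reduces to roughly this, with the flags copied from the log:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0, 5 s
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1, 3 s
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # core 2, 3 s, foreground
  wait "$pid0" "$pid1"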
00:09:35.138 ======================================================== 00:09:35.138 Latency(us) 00:09:35.138 Device Information : IOPS MiB/s Average min max 00:09:35.138 PCIE (0000:00:06.0) NSID 1 from core 1: 6344.30 24.78 2520.43 760.07 8960.14 00:09:35.138 PCIE (0000:00:07.0) NSID 1 from core 1: 6343.63 24.78 2521.74 789.33 9394.08 00:09:35.138 PCIE (0000:00:09.0) NSID 1 from core 1: 6357.61 24.83 2516.39 687.12 10321.15 00:09:35.138 PCIE (0000:00:08.0) NSID 1 from core 1: 6357.61 24.83 2517.41 682.34 10126.60 00:09:35.138 PCIE (0000:00:08.0) NSID 2 from core 1: 6356.94 24.83 2517.92 757.01 9750.37 00:09:35.138 PCIE (0000:00:08.0) NSID 3 from core 1: 6353.61 24.82 2519.22 765.93 9184.58 00:09:35.138 ======================================================== 00:09:35.138 Total : 38113.70 148.88 2518.85 682.34 10321.15 00:09:35.138 00:09:35.400 Initializing NVMe Controllers 00:09:35.400 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:35.400 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:35.400 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:35.400 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:35.400 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:09:35.400 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:09:35.400 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:09:35.400 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:09:35.400 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:09:35.400 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:09:35.400 Initialization complete. Launching workers. 00:09:35.400 ======================================================== 00:09:35.400 Latency(us) 00:09:35.400 Device Information : IOPS MiB/s Average min max 00:09:35.400 PCIE (0000:00:06.0) NSID 1 from core 2: 2429.47 9.49 6584.37 1555.73 22921.50 00:09:35.400 PCIE (0000:00:07.0) NSID 1 from core 2: 2429.47 9.49 6586.77 1513.81 23010.68 00:09:35.400 PCIE (0000:00:09.0) NSID 1 from core 2: 2429.47 9.49 6585.97 1456.05 21448.93 00:09:35.400 PCIE (0000:00:08.0) NSID 1 from core 2: 2429.47 9.49 6586.15 1636.92 23119.47 00:09:35.400 PCIE (0000:00:08.0) NSID 2 from core 2: 2429.47 9.49 6585.93 1523.70 24914.31 00:09:35.400 PCIE (0000:00:08.0) NSID 3 from core 2: 2429.47 9.49 6586.90 1602.46 22054.82 00:09:35.400 ======================================================== 00:09:35.400 Total : 14576.79 56.94 6586.02 1456.05 24914.31 00:09:35.400 00:09:35.400 04:05:36 -- nvme/nvme.sh@56 -- # wait 75979 00:09:37.299 Initializing NVMe Controllers 00:09:37.299 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:37.299 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:37.299 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:37.299 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:37.299 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:37.299 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:37.299 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:37.299 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:37.299 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:37.299 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:37.299 Initialization complete. Launching workers. 
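In these tables MiB/s is simply IOPS scaled by the 4 KiB I/O size, which makes any row easy to sanity-check. Taking the core-1 total above as a worked example:

  # IOPS * block size in bytes / 2^20 bytes = MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 38113.70 * 4096 / 1048576 }'
  # -> 148.88 MiB/s, matching the "Total" row of the core-1 table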
00:09:37.299 ======================================================== 00:09:37.299 Latency(us) 00:09:37.299 Device Information : IOPS MiB/s Average min max 00:09:37.299 PCIE (0000:00:06.0) NSID 1 from core 0: 8935.49 34.90 1789.33 726.03 10358.07 00:09:37.299 PCIE (0000:00:07.0) NSID 1 from core 0: 8934.69 34.90 1790.39 765.47 9940.13 00:09:37.299 PCIE (0000:00:09.0) NSID 1 from core 0: 8929.89 34.88 1791.32 708.38 11076.30 00:09:37.299 PCIE (0000:00:08.0) NSID 1 from core 0: 8931.69 34.89 1790.94 571.89 10536.11 00:09:37.299 PCIE (0000:00:08.0) NSID 2 from core 0: 8934.49 34.90 1790.36 519.01 11610.67 00:09:37.299 PCIE (0000:00:08.0) NSID 3 from core 0: 8933.69 34.90 1790.50 430.29 10480.11 00:09:37.299 ======================================================== 00:09:37.299 Total : 53599.96 209.37 1790.47 430.29 11610.67 00:09:37.299 00:09:37.299 04:05:38 -- nvme/nvme.sh@57 -- # wait 75980 00:09:37.299 04:05:38 -- nvme/nvme.sh@61 -- # pid0=76049 00:09:37.299 04:05:38 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:37.299 04:05:38 -- nvme/nvme.sh@63 -- # pid1=76050 00:09:37.299 04:05:38 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:37.299 04:05:38 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:40.655 Initializing NVMe Controllers 00:09:40.655 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:40.655 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:40.655 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:40.655 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:40.655 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:40.655 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:40.655 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:40.655 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:40.655 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:40.655 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:40.655 Initialization complete. Launching workers. 
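This second round appears to flip the lifetimes relative to the first: the 5 s job now sits on core 2 and runs in the foreground, so both 3 s background jobs (cores 0 and 1, pids 76049 and 76050) exit while it is still issuing I/O, exercising the opposite teardown order. Roughly, under the same assumptions as the sketch above:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 & pid0=$!   # exits first
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # exits first
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4             # outlives both
  wait "$pid0" "$pid1"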
00:09:40.655 ======================================================== 00:09:40.655 Latency(us) 00:09:40.655 Device Information : IOPS MiB/s Average min max 00:09:40.655 PCIE (0000:00:06.0) NSID 1 from core 0: 7623.76 29.78 2097.34 769.48 5579.47 00:09:40.655 PCIE (0000:00:07.0) NSID 1 from core 0: 7623.76 29.78 2098.36 796.24 6072.70 00:09:40.655 PCIE (0000:00:09.0) NSID 1 from core 0: 7623.76 29.78 2098.43 787.05 6346.90 00:09:40.655 PCIE (0000:00:08.0) NSID 1 from core 0: 7623.76 29.78 2098.47 764.58 6092.63 00:09:40.655 PCIE (0000:00:08.0) NSID 2 from core 0: 7623.76 29.78 2098.49 775.33 5700.00 00:09:40.655 PCIE (0000:00:08.0) NSID 3 from core 0: 7623.76 29.78 2098.49 787.37 6002.06 00:09:40.655 ======================================================== 00:09:40.655 Total : 45742.56 178.68 2098.26 764.58 6346.90 00:09:40.655 00:09:40.655 Initializing NVMe Controllers 00:09:40.655 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:40.655 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:40.655 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:40.655 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:40.655 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:09:40.655 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:09:40.655 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:09:40.655 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:09:40.655 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:09:40.655 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:09:40.655 Initialization complete. Launching workers. 00:09:40.655 ======================================================== 00:09:40.655 Latency(us) 00:09:40.655 Device Information : IOPS MiB/s Average min max 00:09:40.655 PCIE (0000:00:06.0) NSID 1 from core 1: 7699.83 30.08 2076.56 825.21 7157.52 00:09:40.655 PCIE (0000:00:07.0) NSID 1 from core 1: 7699.83 30.08 2077.52 845.51 6929.43 00:09:40.655 PCIE (0000:00:09.0) NSID 1 from core 1: 7699.83 30.08 2077.48 831.80 6578.93 00:09:40.655 PCIE (0000:00:08.0) NSID 1 from core 1: 7699.83 30.08 2077.41 845.44 6847.36 00:09:40.655 PCIE (0000:00:08.0) NSID 2 from core 1: 7699.83 30.08 2077.25 845.30 5805.72 00:09:40.655 PCIE (0000:00:08.0) NSID 3 from core 1: 7699.83 30.08 2077.08 844.63 5870.05 00:09:40.655 ======================================================== 00:09:40.655 Total : 46198.99 180.46 2077.22 825.21 7157.52 00:09:40.655 00:09:42.554 Initializing NVMe Controllers 00:09:42.554 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:42.554 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:42.554 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:42.554 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:42.554 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:09:42.554 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:09:42.554 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:09:42.554 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:09:42.554 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:09:42.554 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:09:42.554 Initialization complete. Launching workers. 
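These numbers also hang together via Little's law: each process keeps queue depth 16 outstanding on each of 6 namespaces, and 96 in-flight I/Os divided by the mean latency should reproduce the IOPS column. Checking the core-0 total above:

  # Little's law: in-flight I/Os / mean latency (s) ~= IOPS
  awk 'BEGIN { qd = 16; ns = 6; lat_us = 2098.26
               printf "%.0f IOPS\n", qd * ns / (lat_us / 1e6) }'
  # -> ~45752, against 45742.56 measured for the core-0 run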
00:09:42.554 ======================================================== 00:09:42.554 Latency(us) 00:09:42.554 Device Information : IOPS MiB/s Average min max 00:09:42.554 PCIE (0000:00:06.0) NSID 1 from core 2: 4566.44 17.84 3502.33 796.42 12318.08 00:09:42.554 PCIE (0000:00:07.0) NSID 1 from core 2: 4566.44 17.84 3503.18 756.32 14465.00 00:09:42.554 PCIE (0000:00:09.0) NSID 1 from core 2: 4566.44 17.84 3503.48 796.94 13226.70 00:09:42.554 PCIE (0000:00:08.0) NSID 1 from core 2: 4566.44 17.84 3503.06 810.75 13256.61 00:09:42.554 PCIE (0000:00:08.0) NSID 2 from core 2: 4566.44 17.84 3503.34 809.13 12885.71 00:09:42.554 PCIE (0000:00:08.0) NSID 3 from core 2: 4566.44 17.84 3502.74 778.85 12871.98 00:09:42.554 ======================================================== 00:09:42.554 Total : 27398.63 107.03 3503.02 756.32 14465.00 00:09:42.554 00:09:42.554 ************************************ 00:09:42.554 END TEST nvme_multi_secondary 00:09:42.554 ************************************ 00:09:42.554 04:05:44 -- nvme/nvme.sh@65 -- # wait 76049 00:09:42.554 04:05:44 -- nvme/nvme.sh@66 -- # wait 76050 00:09:42.554 00:09:42.554 real 0m10.635s 00:09:42.554 user 0m18.328s 00:09:42.554 sys 0m0.603s 00:09:42.554 04:05:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:42.554 04:05:44 -- common/autotest_common.sh@10 -- # set +x 00:09:42.554 04:05:44 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:42.554 04:05:44 -- nvme/nvme.sh@102 -- # kill_stub 00:09:42.554 04:05:44 -- common/autotest_common.sh@1075 -- # [[ -e /proc/74996 ]] 00:09:42.554 04:05:44 -- common/autotest_common.sh@1076 -- # kill 74996 00:09:42.554 04:05:44 -- common/autotest_common.sh@1077 -- # wait 74996 00:09:43.126 [2024-11-26 04:05:44.752960] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:43.126 [2024-11-26 04:05:44.753381] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:43.126 [2024-11-26 04:05:44.753425] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:43.126 [2024-11-26 04:05:44.753457] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:43.698 [2024-11-26 04:05:45.255233] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:43.698 [2024-11-26 04:05:45.255465] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:43.698 [2024-11-26 04:05:45.255524] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:43.698 [2024-11-26 04:05:45.255555] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:44.268 [2024-11-26 04:05:45.754711] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:44.268 [2024-11-26 04:05:45.754833] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. 
Dropping the request. 00:09:44.268 [2024-11-26 04:05:45.754867] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:44.268 [2024-11-26 04:05:45.754905] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:45.210 [2024-11-26 04:05:46.764932] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:45.210 [2024-11-26 04:05:46.765093] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:45.210 [2024-11-26 04:05:46.765132] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:45.210 [2024-11-26 04:05:46.765170] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75933) is not found. Dropping the request. 00:09:45.210 04:05:46 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:09:45.210 04:05:46 -- common/autotest_common.sh@1083 -- # echo 2 00:09:45.210 04:05:46 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:45.210 04:05:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:45.210 04:05:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.210 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:09:45.210 ************************************ 00:09:45.210 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:45.210 ************************************ 00:09:45.210 04:05:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:45.210 * Looking for test storage... 00:09:45.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.210 04:05:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:45.210 04:05:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:45.210 04:05:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:45.472 04:05:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:45.472 04:05:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:45.472 04:05:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:45.472 04:05:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:45.472 04:05:47 -- scripts/common.sh@335 -- # IFS=.-: 00:09:45.472 04:05:47 -- scripts/common.sh@335 -- # read -ra ver1 00:09:45.472 04:05:47 -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.472 04:05:47 -- scripts/common.sh@336 -- # read -ra ver2 00:09:45.472 04:05:47 -- scripts/common.sh@337 -- # local 'op=<' 00:09:45.472 04:05:47 -- scripts/common.sh@339 -- # ver1_l=2 00:09:45.472 04:05:47 -- scripts/common.sh@340 -- # ver2_l=1 00:09:45.472 04:05:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:45.472 04:05:47 -- scripts/common.sh@343 -- # case "$op" in 00:09:45.472 04:05:47 -- scripts/common.sh@344 -- # : 1 00:09:45.472 04:05:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:45.472 04:05:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.472 04:05:47 -- scripts/common.sh@364 -- # decimal 1 00:09:45.472 04:05:47 -- scripts/common.sh@352 -- # local d=1 00:09:45.472 04:05:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.472 04:05:47 -- scripts/common.sh@354 -- # echo 1 00:09:45.472 04:05:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:45.472 04:05:47 -- scripts/common.sh@365 -- # decimal 2 00:09:45.472 04:05:47 -- scripts/common.sh@352 -- # local d=2 00:09:45.472 04:05:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.472 04:05:47 -- scripts/common.sh@354 -- # echo 2 00:09:45.472 04:05:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:45.472 04:05:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:45.472 04:05:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:45.472 04:05:47 -- scripts/common.sh@367 -- # return 0 00:09:45.472 04:05:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.472 04:05:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.472 --rc genhtml_branch_coverage=1 00:09:45.472 --rc genhtml_function_coverage=1 00:09:45.472 --rc genhtml_legend=1 00:09:45.472 --rc geninfo_all_blocks=1 00:09:45.472 --rc geninfo_unexecuted_blocks=1 00:09:45.472 00:09:45.472 ' 00:09:45.472 04:05:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.472 --rc genhtml_branch_coverage=1 00:09:45.472 --rc genhtml_function_coverage=1 00:09:45.472 --rc genhtml_legend=1 00:09:45.472 --rc geninfo_all_blocks=1 00:09:45.472 --rc geninfo_unexecuted_blocks=1 00:09:45.472 00:09:45.472 ' 00:09:45.472 04:05:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.472 --rc genhtml_branch_coverage=1 00:09:45.472 --rc genhtml_function_coverage=1 00:09:45.472 --rc genhtml_legend=1 00:09:45.472 --rc geninfo_all_blocks=1 00:09:45.472 --rc geninfo_unexecuted_blocks=1 00:09:45.472 00:09:45.472 ' 00:09:45.472 04:05:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.472 --rc genhtml_branch_coverage=1 00:09:45.473 --rc genhtml_function_coverage=1 00:09:45.473 --rc genhtml_legend=1 00:09:45.473 --rc geninfo_all_blocks=1 00:09:45.473 --rc geninfo_unexecuted_blocks=1 00:09:45.473 00:09:45.473 ' 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:45.473 04:05:47 -- common/autotest_common.sh@1519 -- # bdfs=() 00:09:45.473 04:05:47 -- common/autotest_common.sh@1519 -- # local bdfs 00:09:45.473 04:05:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:45.473 04:05:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:45.473 04:05:47 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:45.473 04:05:47 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:45.473 04:05:47 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:45.473 04:05:47 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:45.473 04:05:47 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:45.473 04:05:47 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:45.473 04:05:47 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:45.473 04:05:47 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76234 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:45.473 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76234 00:09:45.473 04:05:47 -- common/autotest_common.sh@829 -- # '[' -z 76234 ']' 00:09:45.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.473 04:05:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.473 04:05:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.473 04:05:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.473 04:05:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.473 04:05:47 -- common/autotest_common.sh@10 -- # set +x 00:09:45.473 [2024-11-26 04:05:47.179724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
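Everything from here on is driven over JSON-RPC against the spdk_tgt that just started: the script attaches the first controller as bdev nvme0, arms a one-shot injection that holds admin opcode 10 (Get Features) for up to 15 s and completes it with SCT 0 / SC 1, fires a Get Features through bdev_nvme_send_cmd so it gets stuck, then shows that bdev_nvme_reset_controller aborts it well inside the 5 s test budget. Condensed, with the base64 command payload elided:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
  "$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
         --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  "$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$CMD_B64" &   # blocks on the injection
  "$RPC" bdev_nvme_reset_controller nvme0                              # aborts the stuck command
  wait   # the send_cmd RPC returns once the reset completes it
  "$RPC" bdev_nvme_detach_controller nvme0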
00:09:45.473 [2024-11-26 04:05:47.179843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76234 ] 00:09:45.734 [2024-11-26 04:05:47.342037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.734 [2024-11-26 04:05:47.386778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:45.735 [2024-11-26 04:05:47.387250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.735 [2024-11-26 04:05:47.387443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.735 [2024-11-26 04:05:47.387675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.735 [2024-11-26 04:05:47.387750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.308 04:05:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.308 04:05:47 -- common/autotest_common.sh@862 -- # return 0 00:09:46.308 04:05:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:09:46.308 04:05:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.308 04:05:47 -- common/autotest_common.sh@10 -- # set +x 00:09:46.308 nvme0n1 00:09:46.308 04:05:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WrmEU.txt 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:46.308 04:05:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.308 04:05:48 -- common/autotest_common.sh@10 -- # set +x 00:09:46.308 true 00:09:46.308 04:05:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732593948 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76257 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:46.308 04:05:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:48.842 04:05:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.842 04:05:50 -- common/autotest_common.sh@10 -- # set +x 00:09:48.842 [2024-11-26 04:05:50.077188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:48.842 [2024-11-26 04:05:50.077448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:48.842 [2024-11-26 04:05:50.077472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:48.842 [2024-11-26 04:05:50.077486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:48.842 [2024-11-26 04:05:50.079587] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:48.842 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76257 00:09:48.842 04:05:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76257 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76257 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:48.842 04:05:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.842 04:05:50 -- common/autotest_common.sh@10 -- # set +x 00:09:48.842 04:05:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WrmEU.txt 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WrmEU.txt 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76234 00:09:48.842 04:05:50 -- common/autotest_common.sh@936 -- # '[' -z 76234 ']' 00:09:48.842 04:05:50 -- common/autotest_common.sh@940 -- # kill -0 76234 00:09:48.842 04:05:50 -- common/autotest_common.sh@941 -- # uname 00:09:48.842 04:05:50 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:48.842 04:05:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76234 00:09:48.842 killing process with pid 76234 00:09:48.842 04:05:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:48.842 04:05:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:48.842 04:05:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76234' 00:09:48.842 04:05:50 -- common/autotest_common.sh@955 -- # kill 76234 00:09:48.842 04:05:50 -- common/autotest_common.sh@960 -- # wait 76234 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:48.842 04:05:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:48.842 00:09:48.842 real 0m3.557s 00:09:48.842 user 0m12.465s 00:09:48.842 sys 0m0.533s 00:09:48.843 04:05:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.843 ************************************ 00:09:48.843 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:48.843 ************************************ 00:09:48.843 04:05:50 -- common/autotest_common.sh@10 -- # set +x 00:09:48.843 04:05:50 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:48.843 04:05:50 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:48.843 04:05:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:48.843 04:05:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.843 04:05:50 -- common/autotest_common.sh@10 -- # set +x 00:09:48.843 ************************************ 00:09:48.843 START TEST nvme_fio 00:09:48.843 ************************************ 00:09:48.843 04:05:50 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:09:48.843 04:05:50 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:48.843 04:05:50 -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:48.843 04:05:50 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:48.843 04:05:50 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:48.843 04:05:50 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:48.843 04:05:50 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:48.843 04:05:50 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:48.843 04:05:50 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:48.843 04:05:50 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:48.843 04:05:50 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:48.843 04:05:50 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:09:48.843 04:05:50 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:48.843 04:05:50 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:48.843 04:05:50 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:48.843 04:05:50 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:49.102 04:05:50 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:49.102 04:05:50 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:49.360 04:05:50 -- nvme/nvme.sh@41 -- # bs=4096 00:09:49.360 04:05:50 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:09:49.360 04:05:50 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:09:49.360 04:05:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:09:49.360 04:05:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:49.360 04:05:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:09:49.360 04:05:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:49.360 04:05:50 -- common/autotest_common.sh@1330 -- # shift 00:09:49.360 04:05:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:09:49.360 04:05:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:09:49.360 04:05:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:49.360 04:05:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:09:49.360 04:05:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:09:49.360 04:05:50 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:49.360 04:05:50 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:49.360 04:05:50 -- common/autotest_common.sh@1336 -- # break 00:09:49.360 04:05:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:49.360 04:05:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:09:49.619 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:49.619 fio-3.35 00:09:49.619 Starting 1 thread 00:09:53.800 00:09:53.800 test: (groupid=0, jobs=1): err= 0: pid=76388: Tue Nov 26 04:05:55 2024 00:09:53.800 read: IOPS=17.3k, BW=67.8MiB/s (71.1MB/s)(137MiB/2022msec) 00:09:53.800 slat (nsec): min=3332, max=54291, avg=5113.09, stdev=2319.38 00:09:53.800 clat (usec): min=978, max=26402, avg=3067.61, stdev=1469.29 00:09:53.800 lat (usec): min=982, max=26406, avg=3072.73, stdev=1470.06 00:09:53.800 clat percentiles (usec): 00:09:53.800 | 1.00th=[ 1549], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:53.800 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:53.800 | 70.00th=[ 2769], 80.00th=[ 3326], 90.00th=[ 4752], 95.00th=[ 5735], 00:09:53.800 | 99.00th=[ 7963], 99.50th=[ 9503], 99.90th=[20841], 99.95th=[25822], 00:09:53.800 | 99.99th=[26346] 00:09:53.800 bw ( KiB/s): min=26160, max=89952, per=100.00%, avg=70096.00, stdev=29819.69, samples=4 00:09:53.800 iops : min= 6540, max=22488, avg=17524.00, stdev=7454.92, samples=4 00:09:53.800 write: IOPS=17.4k, BW=67.8MiB/s (71.1MB/s)(137MiB/2022msec); 0 zone resets 00:09:53.800 slat (nsec): min=3505, max=82317, avg=5385.60, stdev=2416.82 00:09:53.800 clat (usec): min=1020, max=52236, avg=4283.51, stdev=5848.78 00:09:53.800 lat (usec): min=1024, max=52241, avg=4288.89, stdev=5849.13 00:09:53.800 clat percentiles (usec): 00:09:53.800 | 1.00th=[ 1762], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2442], 00:09:53.800 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:53.800 | 70.00th=[ 2802], 80.00th=[ 3654], 90.00th=[ 5538], 95.00th=[18220], 00:09:53.800 | 99.00th=[33424], 
99.50th=[36963], 99.90th=[47449], 99.95th=[49021], 00:09:53.800 | 99.99th=[51643] 00:09:53.800 bw ( KiB/s): min=26176, max=89792, per=100.00%, avg=70050.00, stdev=29739.99, samples=4 00:09:53.800 iops : min= 6544, max=22448, avg=17512.50, stdev=7435.00, samples=4 00:09:53.800 lat (usec) : 1000=0.01% 00:09:53.800 lat (msec) : 2=2.71%, 4=81.07%, 10=13.24%, 20=0.53%, 50=2.42% 00:09:53.800 lat (msec) : 100=0.01% 00:09:53.800 cpu : usr=99.16%, sys=0.05%, ctx=3, majf=0, minf=627 00:09:53.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:53.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.800 issued rwts: total=35083,35117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.800 00:09:53.800 Run status group 0 (all jobs): 00:09:53.801 READ: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=137MiB (144MB), run=2022-2022msec 00:09:53.801 WRITE: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=137MiB (144MB), run=2022-2022msec 00:09:53.801 ----------------------------------------------------- 00:09:53.801 Suppressions used: 00:09:53.801 count bytes template 00:09:53.801 1 32 /usr/src/fio/parse.c 00:09:53.801 1 8 libtcmalloc_minimal.so 00:09:53.801 ----------------------------------------------------- 00:09:53.801 00:09:53.801 04:05:55 -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:53.801 04:05:55 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:53.801 04:05:55 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:53.801 04:05:55 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:54.059 04:05:55 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:54.059 04:05:55 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:54.317 04:05:55 -- nvme/nvme.sh@41 -- # bs=4096 00:09:54.317 04:05:55 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:09:54.317 04:05:55 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:09:54.317 04:05:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:09:54.317 04:05:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:54.317 04:05:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:09:54.317 04:05:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:54.317 04:05:55 -- common/autotest_common.sh@1330 -- # shift 00:09:54.317 04:05:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:09:54.317 04:05:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:09:54.317 04:05:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:54.317 04:05:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:09:54.317 04:05:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:09:54.317 04:05:55 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:54.317 04:05:55 -- common/autotest_common.sh@1335 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:54.317 04:05:55 -- common/autotest_common.sh@1336 -- # break 00:09:54.317 04:05:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:54.317 04:05:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:09:54.576 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:54.576 fio-3.35 00:09:54.576 Starting 1 thread 00:09:59.839 00:09:59.839 test: (groupid=0, jobs=1): err= 0: pid=76453: Tue Nov 26 04:06:01 2024 00:09:59.839 read: IOPS=18.6k, BW=72.5MiB/s (76.1MB/s)(147MiB/2031msec) 00:09:59.839 slat (nsec): min=3350, max=56467, avg=5185.32, stdev=2570.47 00:09:59.839 clat (usec): min=1049, max=34440, avg=3092.66, stdev=1538.55 00:09:59.839 lat (usec): min=1053, max=34445, avg=3097.85, stdev=1539.53 00:09:59.839 clat percentiles (usec): 00:09:59.839 | 1.00th=[ 1795], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:59.839 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:59.839 | 70.00th=[ 2769], 80.00th=[ 3097], 90.00th=[ 4948], 95.00th=[ 6128], 00:09:59.839 | 99.00th=[ 8356], 99.50th=[ 9241], 99.90th=[11863], 99.95th=[33162], 00:09:59.839 | 99.99th=[34341] 00:09:59.839 bw ( KiB/s): min=40112, max=89008, per=100.00%, avg=75372.00, stdev=23548.40, samples=4 00:09:59.839 iops : min=10028, max=22252, avg=18843.00, stdev=5887.10, samples=4 00:09:59.839 write: IOPS=18.6k, BW=72.6MiB/s (76.1MB/s)(147MiB/2031msec); 0 zone resets 00:09:59.839 slat (nsec): min=3423, max=60916, avg=5459.53, stdev=2570.10 00:09:59.839 clat (usec): min=1076, max=72485, avg=3777.84, stdev=5123.14 00:09:59.839 lat (usec): min=1081, max=72489, avg=3783.30, stdev=5123.48 00:09:59.839 clat percentiles (usec): 00:09:59.839 | 1.00th=[ 1860], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:59.839 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:59.839 | 70.00th=[ 2802], 80.00th=[ 3195], 90.00th=[ 5276], 95.00th=[ 6521], 00:09:59.839 | 99.00th=[36439], 99.50th=[38536], 99.90th=[47449], 99.95th=[59507], 00:09:59.839 | 99.99th=[70779] 00:09:59.839 bw ( KiB/s): min=39336, max=88384, per=100.00%, avg=75290.00, stdev=23990.53, samples=4 00:09:59.839 iops : min= 9834, max=22096, avg=18822.50, stdev=5997.63, samples=4 00:09:59.840 lat (msec) : 2=1.93%, 4=83.61%, 10=13.07%, 20=0.23%, 50=1.13% 00:09:59.840 lat (msec) : 100=0.04% 00:09:59.840 cpu : usr=99.31%, sys=0.00%, ctx=4, majf=0, minf=627 00:09:59.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:59.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.840 issued rwts: total=37717,37741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.840 00:09:59.840 Run status group 0 (all jobs): 00:09:59.840 READ: bw=72.5MiB/s (76.1MB/s), 72.5MiB/s-72.5MiB/s (76.1MB/s-76.1MB/s), io=147MiB (154MB), run=2031-2031msec 00:09:59.840 WRITE: bw=72.6MiB/s (76.1MB/s), 72.6MiB/s-72.6MiB/s (76.1MB/s-76.1MB/s), io=147MiB (155MB), run=2031-2031msec 00:09:59.840 ----------------------------------------------------- 00:09:59.840 Suppressions used: 00:09:59.840 count bytes template 00:09:59.840 1 32 /usr/src/fio/parse.c 00:09:59.840 1 8 
libtcmalloc_minimal.so 00:09:59.840 ----------------------------------------------------- 00:09:59.840 00:09:59.840 04:06:01 -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:59.840 04:06:01 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:59.840 04:06:01 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:59.840 04:06:01 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:00.101 04:06:01 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:00.101 04:06:01 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:00.101 04:06:01 -- nvme/nvme.sh@41 -- # bs=4096 00:10:00.101 04:06:01 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:00.101 04:06:01 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:00.101 04:06:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:00.101 04:06:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:00.101 04:06:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:00.101 04:06:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:00.101 04:06:01 -- common/autotest_common.sh@1330 -- # shift 00:10:00.101 04:06:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:00.101 04:06:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:00.101 04:06:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:00.101 04:06:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:00.101 04:06:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:00.101 04:06:01 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:00.101 04:06:01 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:00.101 04:06:01 -- common/autotest_common.sh@1336 -- # break 00:10:00.101 04:06:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:00.101 04:06:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:00.363 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:00.363 fio-3.35 00:10:00.363 Starting 1 thread 00:10:03.649 00:10:03.649 test: (groupid=0, jobs=1): err= 0: pid=76531: Tue Nov 26 04:06:05 2024 00:10:03.649 read: IOPS=12.8k, BW=50.2MiB/s (52.6MB/s)(102MiB/2029msec) 00:10:03.649 slat (usec): min=3, max=120, avg= 5.37, stdev= 3.09 00:10:03.649 clat (usec): min=745, max=33163, avg=3741.95, stdev=2112.98 00:10:03.649 lat (usec): min=749, max=33168, avg=3747.32, stdev=2113.61 00:10:03.649 clat percentiles (usec): 00:10:03.649 | 1.00th=[ 1483], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2507], 00:10:03.649 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 3097], 00:10:03.649 | 70.00th=[ 3949], 80.00th=[ 4948], 90.00th=[ 6259], 95.00th=[ 7898], 00:10:03.649 | 99.00th=[10683], 99.50th=[12518], 99.90th=[19006], 99.95th=[30278], 00:10:03.649 
| 99.99th=[32900] 00:10:03.649 bw ( KiB/s): min=19128, max=81064, per=100.00%, avg=52088.00, stdev=33234.03, samples=4 00:10:03.649 iops : min= 4782, max=20266, avg=13022.00, stdev=8308.51, samples=4 00:10:03.649 write: IOPS=12.8k, BW=50.2MiB/s (52.6MB/s)(102MiB/2029msec); 0 zone resets 00:10:03.649 slat (nsec): min=3466, max=86546, avg=5677.50, stdev=3097.54 00:10:03.649 clat (usec): min=884, max=58554, avg=6195.64, stdev=9480.26 00:10:03.649 lat (usec): min=889, max=58559, avg=6201.32, stdev=9480.44 00:10:03.649 clat percentiles (usec): 00:10:03.649 | 1.00th=[ 1614], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:10:03.649 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3195], 00:10:03.649 | 70.00th=[ 4178], 80.00th=[ 5407], 90.00th=[ 8717], 95.00th=[31589], 00:10:03.650 | 99.00th=[49021], 99.50th=[50594], 99.90th=[54264], 99.95th=[55837], 00:10:03.650 | 99.99th=[58459] 00:10:03.650 bw ( KiB/s): min=18216, max=80888, per=100.00%, avg=51922.00, stdev=33318.99, samples=4 00:10:03.650 iops : min= 4554, max=20222, avg=12980.50, stdev=8329.75, samples=4 00:10:03.650 lat (usec) : 750=0.01%, 1000=0.07% 00:10:03.650 lat (msec) : 2=2.62%, 4=66.72%, 10=25.39%, 20=1.19%, 50=3.67% 00:10:03.650 lat (msec) : 100=0.35% 00:10:03.650 cpu : usr=99.11%, sys=0.10%, ctx=11, majf=0, minf=628 00:10:03.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:03.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.650 issued rwts: total=26067,26065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.650 00:10:03.650 Run status group 0 (all jobs): 00:10:03.650 READ: bw=50.2MiB/s (52.6MB/s), 50.2MiB/s-50.2MiB/s (52.6MB/s-52.6MB/s), io=102MiB (107MB), run=2029-2029msec 00:10:03.650 WRITE: bw=50.2MiB/s (52.6MB/s), 50.2MiB/s-50.2MiB/s (52.6MB/s-52.6MB/s), io=102MiB (107MB), run=2029-2029msec 00:10:03.907 ----------------------------------------------------- 00:10:03.907 Suppressions used: 00:10:03.907 count bytes template 00:10:03.907 1 32 /usr/src/fio/parse.c 00:10:03.907 1 8 libtcmalloc_minimal.so 00:10:03.907 ----------------------------------------------------- 00:10:03.907 00:10:03.907 04:06:05 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:03.907 04:06:05 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:03.907 04:06:05 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:03.907 04:06:05 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:04.164 04:06:05 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:04.164 04:06:05 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:04.422 04:06:06 -- nvme/nvme.sh@41 -- # bs=4096 00:10:04.422 04:06:06 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:04.422 04:06:06 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:04.422 04:06:06 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:04.422 04:06:06 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:04.422 04:06:06 
-- common/autotest_common.sh@1328 -- # local sanitizers 00:10:04.422 04:06:06 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:04.422 04:06:06 -- common/autotest_common.sh@1330 -- # shift 00:10:04.422 04:06:06 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:04.422 04:06:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:04.422 04:06:06 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:04.422 04:06:06 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:04.422 04:06:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:04.422 04:06:06 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:04.422 04:06:06 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:04.422 04:06:06 -- common/autotest_common.sh@1336 -- # break 00:10:04.422 04:06:06 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:04.422 04:06:06 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:04.679 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:04.679 fio-3.35 00:10:04.679 Starting 1 thread 00:10:11.235 00:10:11.235 test: (groupid=0, jobs=1): err= 0: pid=76591: Tue Nov 26 04:06:11 2024 00:10:11.235 read: IOPS=22.8k, BW=89.0MiB/s (93.3MB/s)(178MiB/2001msec) 00:10:11.235 slat (usec): min=3, max=363, avg= 4.97, stdev= 2.76 00:10:11.235 clat (usec): min=199, max=11142, avg=2806.86, stdev=787.74 00:10:11.235 lat (usec): min=203, max=11204, avg=2811.83, stdev=789.13 00:10:11.235 clat percentiles (usec): 00:10:11.235 | 1.00th=[ 2073], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2474], 00:10:11.235 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:10:11.235 | 70.00th=[ 2638], 80.00th=[ 2802], 90.00th=[ 3458], 95.00th=[ 4621], 00:10:11.235 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 8291], 99.95th=[ 8848], 00:10:11.235 | 99.99th=[11076] 00:10:11.235 bw ( KiB/s): min=88688, max=90352, per=98.34%, avg=89576.00, stdev=837.63, samples=3 00:10:11.235 iops : min=22172, max=22588, avg=22394.00, stdev=209.41, samples=3 00:10:11.235 write: IOPS=22.6k, BW=88.4MiB/s (92.7MB/s)(177MiB/2001msec); 0 zone resets 00:10:11.235 slat (nsec): min=3408, max=94972, avg=5366.92, stdev=2236.32 00:10:11.235 clat (usec): min=214, max=11068, avg=2810.33, stdev=807.01 00:10:11.236 lat (usec): min=218, max=11083, avg=2815.69, stdev=808.43 00:10:11.236 clat percentiles (usec): 00:10:11.236 | 1.00th=[ 2073], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2474], 00:10:11.236 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2540], 60.00th=[ 2573], 00:10:11.236 | 70.00th=[ 2638], 80.00th=[ 2802], 90.00th=[ 3490], 95.00th=[ 4752], 00:10:11.236 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 8291], 99.95th=[ 9372], 00:10:11.236 | 99.99th=[10814] 00:10:11.236 bw ( KiB/s): min=89392, max=90032, per=99.08%, avg=89730.67, stdev=321.63, samples=3 00:10:11.236 iops : min=22348, max=22508, avg=22432.67, stdev=80.41, samples=3 00:10:11.236 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:10:11.236 lat (msec) : 2=0.73%, 4=91.92%, 10=7.26%, 20=0.04% 00:10:11.236 cpu : usr=99.25%, sys=0.00%, ctx=6, majf=0, minf=625 00:10:11.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.1%, >=64=99.9% 00:10:11.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.236 issued rwts: total=45568,45304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.236 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.236 00:10:11.236 Run status group 0 (all jobs): 00:10:11.236 READ: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=178MiB (187MB), run=2001-2001msec 00:10:11.236 WRITE: bw=88.4MiB/s (92.7MB/s), 88.4MiB/s-88.4MiB/s (92.7MB/s-92.7MB/s), io=177MiB (186MB), run=2001-2001msec 00:10:11.236 ----------------------------------------------------- 00:10:11.236 Suppressions used: 00:10:11.236 count bytes template 00:10:11.236 1 32 /usr/src/fio/parse.c 00:10:11.236 1 8 libtcmalloc_minimal.so 00:10:11.236 ----------------------------------------------------- 00:10:11.236 00:10:11.236 04:06:12 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:11.236 04:06:12 -- nvme/nvme.sh@46 -- # true 00:10:11.236 00:10:11.236 real 0m21.589s 00:10:11.236 user 0m15.761s 00:10:11.236 sys 0m8.620s 00:10:11.236 04:06:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:11.236 ************************************ 00:10:11.236 END TEST nvme_fio 00:10:11.236 ************************************ 00:10:11.236 04:06:12 -- common/autotest_common.sh@10 -- # set +x 00:10:11.236 ************************************ 00:10:11.236 END TEST nvme 00:10:11.236 ************************************ 00:10:11.236 00:10:11.236 real 1m32.797s 00:10:11.236 user 3m32.344s 00:10:11.236 sys 0m19.245s 00:10:11.236 04:06:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:11.236 04:06:12 -- common/autotest_common.sh@10 -- # set +x 00:10:11.236 04:06:12 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:10:11.236 04:06:12 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:11.236 04:06:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:11.236 04:06:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:11.236 04:06:12 -- common/autotest_common.sh@10 -- # set +x 00:10:11.236 ************************************ 00:10:11.236 START TEST nvme_scc 00:10:11.236 ************************************ 00:10:11.236 04:06:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:11.236 * Looking for test storage... 
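The version probe that follows (scripts/common.sh cmp_versions, re-run at the top of every suite) is a plain field-wise compare: both strings are split on '.', '-' and ':' and walked left to right, which is why "lt 1.15 2" succeeds and the branch-coverage LCOV flags get exported again. A standalone sketch of the same idea, handling numeric fields only:

  # Field-wise version compare: returns 0 if $1 < $2.
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
          ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
      done
      return 1   # equal
  }

  version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2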
00:10:11.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:11.236 04:06:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:11.236 04:06:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:11.236 04:06:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:11.236 04:06:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:11.236 04:06:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:11.236 04:06:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:11.236 04:06:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:11.236 04:06:12 -- scripts/common.sh@335 -- # IFS=.-: 00:10:11.236 04:06:12 -- scripts/common.sh@335 -- # read -ra ver1 00:10:11.236 04:06:12 -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.236 04:06:12 -- scripts/common.sh@336 -- # read -ra ver2 00:10:11.236 04:06:12 -- scripts/common.sh@337 -- # local 'op=<' 00:10:11.236 04:06:12 -- scripts/common.sh@339 -- # ver1_l=2 00:10:11.236 04:06:12 -- scripts/common.sh@340 -- # ver2_l=1 00:10:11.236 04:06:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:11.236 04:06:12 -- scripts/common.sh@343 -- # case "$op" in 00:10:11.236 04:06:12 -- scripts/common.sh@344 -- # : 1 00:10:11.236 04:06:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:11.236 04:06:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.236 04:06:12 -- scripts/common.sh@364 -- # decimal 1 00:10:11.236 04:06:12 -- scripts/common.sh@352 -- # local d=1 00:10:11.236 04:06:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.236 04:06:12 -- scripts/common.sh@354 -- # echo 1 00:10:11.236 04:06:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:11.236 04:06:12 -- scripts/common.sh@365 -- # decimal 2 00:10:11.236 04:06:12 -- scripts/common.sh@352 -- # local d=2 00:10:11.236 04:06:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.236 04:06:12 -- scripts/common.sh@354 -- # echo 2 00:10:11.236 04:06:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:11.236 04:06:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:11.236 04:06:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:11.236 04:06:12 -- scripts/common.sh@367 -- # return 0 00:10:11.236 04:06:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.236 04:06:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:11.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.236 --rc genhtml_branch_coverage=1 00:10:11.236 --rc genhtml_function_coverage=1 00:10:11.236 --rc genhtml_legend=1 00:10:11.236 --rc geninfo_all_blocks=1 00:10:11.236 --rc geninfo_unexecuted_blocks=1 00:10:11.236 00:10:11.236 ' 00:10:11.236 04:06:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:11.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.236 --rc genhtml_branch_coverage=1 00:10:11.236 --rc genhtml_function_coverage=1 00:10:11.236 --rc genhtml_legend=1 00:10:11.236 --rc geninfo_all_blocks=1 00:10:11.236 --rc geninfo_unexecuted_blocks=1 00:10:11.236 00:10:11.236 ' 00:10:11.236 04:06:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:11.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.236 --rc genhtml_branch_coverage=1 00:10:11.236 --rc genhtml_function_coverage=1 00:10:11.236 --rc genhtml_legend=1 00:10:11.236 --rc geninfo_all_blocks=1 00:10:11.236 --rc geninfo_unexecuted_blocks=1 00:10:11.236 00:10:11.236 ' 00:10:11.236 04:06:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:11.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.236 --rc genhtml_branch_coverage=1 00:10:11.236 --rc genhtml_function_coverage=1 00:10:11.236 --rc genhtml_legend=1 00:10:11.236 --rc geninfo_all_blocks=1 00:10:11.236 --rc geninfo_unexecuted_blocks=1 00:10:11.236 00:10:11.236 ' 00:10:11.236 04:06:12 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:11.236 04:06:12 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:11.236 04:06:12 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:11.236 04:06:12 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:11.236 04:06:12 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.236 04:06:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.236 04:06:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.236 04:06:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.236 04:06:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.236 04:06:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.236 04:06:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.236 04:06:12 -- paths/export.sh@5 -- # export PATH 00:10:11.236 04:06:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.236 04:06:12 -- nvme/functions.sh@10 -- # ctrls=() 00:10:11.236 04:06:12 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:11.236 04:06:12 -- nvme/functions.sh@11 -- # nvmes=() 00:10:11.236 04:06:12 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:11.236 04:06:12 -- nvme/functions.sh@12 -- # bdfs=() 00:10:11.236 04:06:12 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:11.236 04:06:12 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:11.236 04:06:12 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:11.236 
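Before the SCC test proper, the harness decides whether the installed lcov is new enough for the modern option names: cmp_versions (scripts/common.sh@332-367 in the trace above) splits both version strings on '.', '-' and ':' and compares them field by field. A condensed sketch of that less-than test, assuming purely numeric fields — the real helper additionally routes each field through a decimal() sanitizer, as the trace shows:

  lt() {
      local -a ver1 ver2
      local v max
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          # First differing field decides; missing fields count as 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2: keep the legacy --rc lcov_* options"

With lcov 1.15 the first differing field is 1 < 2, so lt returns 0 and LCOV_OPTS is exported with the legacy lcov_branch_coverage/lcov_function_coverage names, exactly as the trace records.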
04:06:12 -- nvme/functions.sh@14 -- # nvme_name= 00:10:11.236 04:06:12 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.236 04:06:12 -- nvme/nvme_scc.sh@12 -- # uname 00:10:11.236 04:06:12 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:11.236 04:06:12 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:11.236 04:06:12 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:11.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.237 Waiting for block devices as requested 00:10:11.237 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.237 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.237 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.237 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.547 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:16.547 04:06:18 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:16.547 04:06:18 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:16.547 04:06:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:16.547 04:06:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:16.547 04:06:18 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:16.547 04:06:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:16.547 04:06:18 -- scripts/common.sh@15 -- # local i 00:10:16.547 04:06:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:16.547 04:06:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:16.547 04:06:18 -- scripts/common.sh@24 -- # return 0 00:10:16.547 04:06:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:16.547 04:06:18 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:16.547 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.548 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:16.548 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 
04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.548 04:06:18 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.548 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.548 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.548 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:16.549 04:06:18 
-- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.549 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:16.549 04:06:18 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.549 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:16.550 
04:06:18 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 
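All of the [[ -n ... ]] / eval pairs above are one loop in functions.sh: scan_nvme_ctrls walks /sys/class/nvme/nvme*, checks the PCI address with pci_can_use, then nvme_get shells out to nvme id-ctrl and folds every 'reg : val' line into a global associative array (nvme0, nvme1, ...). A condensed sketch of that capture, assuming id-ctrl's usual key/value layout; the real helper evals each assignment and also records the PCI address in bdfs, as the trace shows next:

  declare -A ctrls bdfs
  nvme_get() {
      local ref=$1 reg val
      declare -gA "$ref"                        # e.g. a global assoc array named nvme0
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}              # 'sn       ' -> 'sn', 'ps 0' -> 'ps0'
          # printf -v writes the element directly, avoiding the eval quoting above
          [[ -n $reg && -n ${val// /} ]] && printf -v "${ref}[$reg]" '%s' "${val# }"
      done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ref")
      ctrls[$ref]=$ref
  }
  nvme_get nvme0 && echo "nvme0: mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"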
00:10:16.550 04:06:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:16.550 04:06:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:16.550 04:06:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:16.550 04:06:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:16.550 04:06:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:16.550 04:06:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:16.550 04:06:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:16.550 04:06:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:16.550 04:06:18 -- scripts/common.sh@15 -- # local i 00:10:16.550 04:06:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:16.550 04:06:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:16.550 04:06:18 -- scripts/common.sh@24 -- # return 0 00:10:16.550 04:06:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:16.550 04:06:18 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:16.550 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.550 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.550 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.550 04:06:18 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:16.550 04:06:18 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:16.550 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:16.551 04:06:18 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:16.551 04:06:18 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.551 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.551 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 
00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.552 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:16.552 04:06:18 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:16.552 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:16.553 04:06:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:16.553 04:06:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:16.553 04:06:18 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:16.553 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.553 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.553 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.553 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:16.553 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
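Each `lbafN` entry captured here describes one supported LBA format: `ms` is the metadata size in bytes, `lbads` is the LBA data size as a power of two, and `rp` is the relative performance. The `flbas=0x4` stored earlier selects format 4, the entry tagged "(in use)", i.e. 4096-byte blocks with no metadata; with `nsze=0x100000` blocks that works out to 0x100000 × 2^12 = 4 GiB per namespace. A hypothetical helper (not part of functions.sh) that decodes the block size from such an entry:

    # lbaf_block_size 'ms:0 lbads:12 rp:0 (in use)'  ->  4096
    lbaf_block_size() {
        local entry=$1 lbads
        lbads=${entry#*lbads:}         # keep the text after "lbads:"
        lbads=${lbads%% *}             # keep just the number
        echo $(( 1 << lbads ))         # block size is 2^lbads bytes
    }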
00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:16.554 04:06:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.554 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.554 04:06:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:16.554 04:06:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:16.554 04:06:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:16.555 04:06:18 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:16.555 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.555 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.555 04:06:18 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:16.555 04:06:18 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.555 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:10:16.555 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.555 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
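With nvme1n2 fully parsed, the @58 step files it under its namespace index: `_ctrl_ns` is a nameref (`local -n`, seen at @53) onto the controller's `nvme1_ns` array, and the @54 glob `for ns in "$ctrl/${ctrl##*/}n"*` walks the sysfs namespace nodes one by one. A condensed sketch of that bookkeeping, with the loop body reduced to the index arithmetic:

    declare -gA nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns       # writes land in nvme1_ns
    for ns in /sys/class/nvme/nvme1/nvme1n*; do
        ns=${ns##*/}                   # basename, e.g. nvme1n2
        _ctrl_ns[${ns##*n}]=$ns        # index "2" -> "nvme1n2"
    done
    declare -p nvme1_ns                # ([1]=nvme1n1 [2]=nvme1n2 ...)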
00:10:16.556 04:06:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:16.556 04:06:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:10:16.556 04:06:18 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:10:16.556 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.556 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
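The `dpc=0x1f` just recorded for nvme1n3 (and for the other namespaces before it) is the end-to-end data protection capabilities bitmask: per the NVMe base spec, bits 0-2 advertise support for PI types 1-3 and bits 3-4 advertise PI placement in the first or last bytes of metadata, so 0x1f means every option is supported, while `dps=0` shows none is currently enabled. A hypothetical decoder for that bitmask:

    # decode_dpc 0x1f  ->  prints all five capabilities
    decode_dpc() {
        local dpc=$(( $1 )) bit labels=(
            'PI type 1 supported' 'PI type 2 supported' 'PI type 3 supported'
            'PI as first bytes of metadata' 'PI as last bytes of metadata'
        )
        for bit in 0 1 2 3 4; do
            (( dpc >> bit & 1 )) && echo "${labels[bit]}"
        done
    }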
00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.556 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:10:16.556 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.556 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:16.557 04:06:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.557 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.557 04:06:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:10:16.557 04:06:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:16.557 04:06:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:16.557 04:06:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:10:16.557 04:06:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:16.557 04:06:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:16.557 04:06:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:16.557 04:06:18 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:10:16.557 04:06:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:10:16.557 04:06:18 -- scripts/common.sh@15 -- # local i 00:10:16.558 04:06:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:10:16.558 04:06:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:16.558 04:06:18 -- scripts/common.sh@24 -- # return 0 00:10:16.558 04:06:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:16.558 04:06:18 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:16.558 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.558 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:16.558 04:06:18 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.558 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.558 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 
04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 
00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:16.559 04:06:18 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.559 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:16.559 04:06:18 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.559 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:16.560 04:06:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:16.560 04:06:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:16.560 04:06:18 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:16.560 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.560 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.560 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:16.560 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:16.560 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.561 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.561 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.561 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:16.562 04:06:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:16.562 04:06:18 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:16.562 04:06:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:16.562 04:06:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:16.562 04:06:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:16.562 04:06:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:16.562 04:06:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:16.562 04:06:18 -- scripts/common.sh@15 -- # local i 00:10:16.562 04:06:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:16.562 04:06:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:16.562 04:06:18 -- scripts/common.sh@24 -- # return 0 00:10:16.562 04:06:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:16.562 04:06:18 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:16.562 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.562 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.562 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:16.562 04:06:18 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.562 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:16.563 04:06:18 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.563 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:16.563 04:06:18 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.563 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
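Everything traced above is the nvme_get helper in test/common/nvme/functions.sh at work: it feeds `nvme id-ctrl /dev/nvme3` through a `while IFS=: read -r reg val` loop and, for each field, emits the `[[ -n ... ]]` guard, an `eval`, and an assignment into a global associative array named after the controller. A minimal sketch of that pattern (the `parse_regs` name and standalone array are illustrative, not the real helper's interface):

    #!/usr/bin/env bash
    # Minimal sketch of the functions.sh id-ctrl parsing loop.
    declare -A regs
    parse_regs() {
        local dev=$1 reg val
        # nvme-cli prints "field : value" per line; split at the first colon.
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # drop padding around the key
            [[ -n $val ]] || continue      # skip blank or header lines
            regs[$reg]=${val# }
        done < <(nvme id-ctrl "$dev")
    }
    parse_regs /dev/nvme3 && echo "oncs=${regs[oncs]}"

The real helper differs in plumbing (it shifts into a caller-named array declared with `local -gA` and assigns via `eval`), but the keys logged here — crdt1..3, oacs, acl, aerl, frmw, lpa and so on — are the raw field names from the `nvme id-ctrl` output.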
00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
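The values captured in the stretch above are worth decoding: nn=256 caps the namespace count, sqes=0x66 and cqes=0x44 pack minimum/maximum queue entry sizes as powers of two (64-byte SQEs, 16-byte CQEs), and oncs=0x15d is the Optional NVM Command Support bitmask. Reading 0x15d against the commonly documented NVMe bit layout (worth re-checking against the spec revision in use) gives bits 0, 2, 3, 4, 6 and 8: Compare, Dataset Management, Write Zeroes, Save/Select in Set/Get Features, Timestamp, and Copy. Bit 8 is the one the SCC selection below keys on:

    #!/usr/bin/env bash
    # Decode ONCS 0x15d; bit names follow the NVMe base spec layout.
    oncs=0x15d
    names=(Compare WriteUncorrectable DatasetMgmt WriteZeroes
           SaveSelectFeatures Reservations Timestamp Verify Copy)
    for bit in "${!names[@]}"; do
        (( oncs & 1 << bit )) && echo "bit $bit: ${names[$bit]}"
    done

Run as-is this prints six lines ending in `bit 8: Copy`, the same `(( oncs & 1 << 8 ))` test that ctrl_has_scc applies to every controller later in this log.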
00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:16.564 04:06:18 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.564 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.564 04:06:18 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:16.824 04:06:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:16.824 04:06:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:16.824 04:06:18 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:16.824 04:06:18 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@18 -- # shift 00:10:16.824 04:06:18 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:16.824 04:06:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.824 
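Two things just happened in the trace: the tail of the id-ctrl dump showed how the colon-splitting loop mangles multi-field lines (the power-state entry surfaces as `ps0` plus a stray `rwt` key holding the remainder of the line), and the helper moved on to the namespace, calling nvme_get again with `id-ns /dev/nvme3n1` into a fresh `nvme3n1` array. The fields it captures next (nsze, flbas, and the lbaf table) pin down the geometry; a sketch of the arithmetic, using the values logged just below:

    #!/usr/bin/env bash
    # Namespace geometry from the id-ns fields (values as logged for nvme3n1).
    nsze=0x140000    # size in logical blocks
    flbas=0x4        # bits 3:0 pick the active LBA format -> lbaf4
    lbads=12         # lbaf4 is "ms:0 lbads:12 (in use)" -> 2^12-byte blocks
    echo "$(( nsze * (1 << lbads) )) bytes"   # 5368709120, i.e. 5 GiB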
04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.824 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.824 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:16.824 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:16.825 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.825 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.825 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:16.826 04:06:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.826 04:06:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.826 04:06:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:16.826 04:06:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:16.826 04:06:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:16.826 04:06:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:16.826 04:06:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:16.826 04:06:18 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:16.826 04:06:18 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:16.826 04:06:18 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:16.826 04:06:18 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:16.826 04:06:18 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:16.826 04:06:18 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:16.826 04:06:18 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:16.826 04:06:18 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:16.826 04:06:18 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:16.826 04:06:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:16.826 04:06:18 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:16.826 04:06:18 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:16.826 04:06:18 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:16.826 04:06:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:16.826 04:06:18 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:16.826 04:06:18 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:16.826 04:06:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:16.826 04:06:18 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:16.826 04:06:18 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:16.826 04:06:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:16.826 04:06:18 -- nvme/functions.sh@197 -- # echo nvme1 00:10:16.826 04:06:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:16.826 04:06:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:16.826 04:06:18 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:16.826 04:06:18 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:16.826 
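The selection pass above is get_ctrls_with_feature iterating `${!ctrls[@]}` and calling ctrl_has_scc on each controller; the interesting bash here is the nameref chain — `local -n _ctrl=nvme1` lets get_nvme_ctrl_feature read a register out of whichever associative array the caller names. A reduced sketch of that lookup (standalone, with a toy array in place of the scanned state):

    #!/usr/bin/env bash
    # Nameref-based register lookup, as in functions.sh get_nvme_ctrl_feature.
    declare -A nvme1=( [oncs]=0x15d )
    get_reg() {
        local -n _ctrl=$1          # _ctrl now aliases the array named by $1
        echo "${_ctrl[$2]}"
    }
    (( $(get_reg nvme1 oncs) & 1 << 8 )) && echo "nvme1 supports Copy"

Every controller in this run reports oncs=0x15d, so all four pass the check; the harness echoes each match and, in the trace that follows, get_ctrl_with_feature returns the first one (nvme1, at 0000:00:08.0).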
04:06:18 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:16.826 04:06:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:16.827 04:06:18 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:16.827 04:06:18 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:16.827 04:06:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:16.827 04:06:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:16.827 04:06:18 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:16.827 04:06:18 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:16.827 04:06:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:16.827 04:06:18 -- nvme/functions.sh@197 -- # echo nvme0 00:10:16.827 04:06:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:16.827 04:06:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:16.827 04:06:18 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:16.827 04:06:18 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:16.827 04:06:18 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:16.827 04:06:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:16.827 04:06:18 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:16.827 04:06:18 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:16.827 04:06:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:16.827 04:06:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:16.827 04:06:18 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:16.827 04:06:18 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:16.827 04:06:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:16.827 04:06:18 -- nvme/functions.sh@197 -- # echo nvme3 00:10:16.827 04:06:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:16.827 04:06:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:16.827 04:06:18 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:16.827 04:06:18 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:16.827 04:06:18 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:16.827 04:06:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:16.827 04:06:18 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:16.827 04:06:18 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:16.827 04:06:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:16.827 04:06:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:16.827 04:06:18 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:16.827 04:06:18 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:16.827 04:06:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:16.827 04:06:18 -- nvme/functions.sh@197 -- # echo nvme2 00:10:16.827 04:06:18 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:16.827 04:06:18 -- nvme/functions.sh@206 -- # echo nvme1 00:10:16.827 04:06:18 -- nvme/functions.sh@207 -- # return 0 00:10:16.827 04:06:18 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:16.827 04:06:18 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:10:16.827 04:06:18 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:17.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:17.651 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.651 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.651 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.651 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.909 04:06:19 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:17.909 04:06:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:17.909 04:06:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.909 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:10:17.909 ************************************ 00:10:17.909 START TEST nvme_simple_copy 00:10:17.909 ************************************ 00:10:17.909 04:06:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:17.909 Initializing NVMe Controllers 00:10:17.909 Attaching to 0000:00:08.0 00:10:17.909 Controller supports SCC. Attached to 0000:00:08.0 00:10:17.909 Namespace ID: 1 size: 4GB 00:10:17.909 Initialization complete. 00:10:17.909 00:10:17.909 Controller QEMU NVMe Ctrl (12342 ) 00:10:17.909 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:17.909 Namespace Block Size:4096 00:10:17.909 Writing LBAs 0 to 63 with Random Data 00:10:17.909 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:17.909 LBAs matching Written Data: 64 00:10:17.909 00:10:17.909 real 0m0.233s 00:10:17.909 user 0m0.070s 00:10:17.909 sys 0m0.061s 00:10:17.909 04:06:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:17.909 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:10:17.909 ************************************ 00:10:17.909 END TEST nvme_simple_copy 00:10:17.909 ************************************ 00:10:18.166 ************************************ 00:10:18.166 END TEST nvme_scc 00:10:18.166 ************************************ 00:10:18.166 00:10:18.166 real 0m7.545s 00:10:18.166 user 0m1.031s 00:10:18.166 sys 0m1.415s 00:10:18.166 04:06:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.166 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:10:18.166 04:06:19 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:10:18.167 04:06:19 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:18.167 04:06:19 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:10:18.167 04:06:19 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:10:18.167 04:06:19 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:18.167 04:06:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:18.167 04:06:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.167 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:10:18.167 ************************************ 00:10:18.167 START TEST nvme_fdp 00:10:18.167 ************************************ 00:10:18.167 04:06:19 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:10:18.167 * Looking for test storage... 
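That closes the SCC test: the simple_copy binary attached to 0000:00:08.0 through SPDK, wrote LBAs 0 to 63 with random data, issued one Copy with destination LBA 256, and read both ranges back — "LBAs matching Written Data: 64" is the pass condition. With the namespace bound back to the kernel nvme driver, the same comparison can be approximated from userspace (a sketch: /dev/nvme1n1 stands in for the selected namespace, and 4096 matches the logged block size):

    #!/usr/bin/env bash
    # Compare source LBAs 0-63 with destination LBAs 256-319 after a copy.
    dev=/dev/nvme1n1 bs=4096
    dd if="$dev" of=/tmp/src.bin bs="$bs" skip=0   count=64 status=none
    dd if="$dev" of=/tmp/dst.bin bs="$bs" skip=256 count=64 status=none
    cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"

From here the log moves on to nvme_fdp, which opens with the usual test-storage and lcov-version preamble.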
00:10:18.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:18.167 04:06:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:18.167 04:06:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:18.167 04:06:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:18.167 04:06:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:18.167 04:06:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:18.167 04:06:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:18.167 04:06:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:18.167 04:06:19 -- scripts/common.sh@335 -- # IFS=.-: 00:10:18.167 04:06:19 -- scripts/common.sh@335 -- # read -ra ver1 00:10:18.167 04:06:19 -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.167 04:06:19 -- scripts/common.sh@336 -- # read -ra ver2 00:10:18.167 04:06:19 -- scripts/common.sh@337 -- # local 'op=<' 00:10:18.167 04:06:19 -- scripts/common.sh@339 -- # ver1_l=2 00:10:18.167 04:06:19 -- scripts/common.sh@340 -- # ver2_l=1 00:10:18.167 04:06:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:18.167 04:06:19 -- scripts/common.sh@343 -- # case "$op" in 00:10:18.167 04:06:19 -- scripts/common.sh@344 -- # : 1 00:10:18.167 04:06:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:18.167 04:06:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.167 04:06:19 -- scripts/common.sh@364 -- # decimal 1 00:10:18.167 04:06:19 -- scripts/common.sh@352 -- # local d=1 00:10:18.167 04:06:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.167 04:06:19 -- scripts/common.sh@354 -- # echo 1 00:10:18.167 04:06:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:18.167 04:06:19 -- scripts/common.sh@365 -- # decimal 2 00:10:18.167 04:06:19 -- scripts/common.sh@352 -- # local d=2 00:10:18.167 04:06:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.167 04:06:19 -- scripts/common.sh@354 -- # echo 2 00:10:18.167 04:06:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:18.167 04:06:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:18.167 04:06:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:18.167 04:06:19 -- scripts/common.sh@367 -- # return 0 00:10:18.167 04:06:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.167 04:06:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:18.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.167 --rc genhtml_branch_coverage=1 00:10:18.167 --rc genhtml_function_coverage=1 00:10:18.167 --rc genhtml_legend=1 00:10:18.167 --rc geninfo_all_blocks=1 00:10:18.167 --rc geninfo_unexecuted_blocks=1 00:10:18.167 00:10:18.167 ' 00:10:18.167 04:06:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:18.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.167 --rc genhtml_branch_coverage=1 00:10:18.167 --rc genhtml_function_coverage=1 00:10:18.167 --rc genhtml_legend=1 00:10:18.167 --rc geninfo_all_blocks=1 00:10:18.167 --rc geninfo_unexecuted_blocks=1 00:10:18.167 00:10:18.167 ' 00:10:18.167 04:06:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:18.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.167 --rc genhtml_branch_coverage=1 00:10:18.167 --rc genhtml_function_coverage=1 00:10:18.167 --rc genhtml_legend=1 00:10:18.167 --rc geninfo_all_blocks=1 00:10:18.167 --rc geninfo_unexecuted_blocks=1 00:10:18.167 00:10:18.167 ' 00:10:18.167 04:06:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:18.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.167 --rc genhtml_branch_coverage=1 00:10:18.167 --rc genhtml_function_coverage=1 00:10:18.167 --rc genhtml_legend=1 00:10:18.167 --rc geninfo_all_blocks=1 00:10:18.167 --rc geninfo_unexecuted_blocks=1 00:10:18.167 00:10:18.167 ' 00:10:18.167 04:06:19 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:18.167 04:06:19 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:18.167 04:06:19 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:18.167 04:06:19 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:18.167 04:06:19 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.167 04:06:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.167 04:06:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.167 04:06:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.167 04:06:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.167 04:06:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.167 04:06:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.167 04:06:19 -- paths/export.sh@5 -- # export PATH 00:10:18.167 04:06:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.167 04:06:19 -- nvme/functions.sh@10 -- # ctrls=() 00:10:18.167 04:06:19 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:18.167 04:06:19 -- nvme/functions.sh@11 -- # nvmes=() 00:10:18.167 04:06:19 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:18.167 04:06:19 -- nvme/functions.sh@12 -- # bdfs=() 00:10:18.167 04:06:19 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:18.167 04:06:19 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:18.167 04:06:19 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:18.167 
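The cmp_versions trace a few lines up is the lcov gate: scripts/common.sh splits each version string on '.', '-' or ':' (`IFS=.-: read -ra ver1`), then compares component by component, so "1.15 < 2" holds and the LCOV_OPTS for lcov 1.x get exported. A compact sketch of the comparison (assuming purely numeric components; the real helper also runs each field through a decimal() guard):

    #!/usr/bin/env bash
    # Field-wise version compare, as traced from scripts/common.sh.
    ver_lt() {
        local -a a b; local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal, so not less-than
    }
    ver_lt 1.15 2 && echo "lcov 1.15 < 2: use the 1.x LCOV_OPTS"

The repeated toolchain directories in the PATH exports just above are also expected: paths/export.sh prepends the same directories each time it is sourced, and this pipeline sources it from several nested scripts.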
04:06:19 -- nvme/functions.sh@14 -- # nvme_name= 00:10:18.167 04:06:19 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.167 04:06:19 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:18.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.734 Waiting for block devices as requested 00:10:18.734 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.734 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.992 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.992 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.280 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:24.281 04:06:25 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:24.281 04:06:25 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:24.281 04:06:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.281 04:06:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:24.281 04:06:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:24.281 04:06:25 -- scripts/common.sh@15 -- # local i 00:10:24.281 04:06:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:24.281 04:06:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:24.281 04:06:25 -- scripts/common.sh@24 -- # return 0 00:10:24.281 04:06:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:24.281 04:06:25 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:24.281 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.281 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:24.281 04:06:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 
04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.281 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:24.281 04:06:25 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.281 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:24.282 
04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:24.282 04:06:25 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 
04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.282 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.282 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.282 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:24.283 04:06:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
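The trace above shows the core of the nvme_get helper: it runs nvme id-ctrl against the device node (functions.sh@16), splits each output line on the first colon ("IFS=:" plus "read -r reg val", functions.sh@21), and evals each trimmed pair into a global associative array named after the controller, here nvme0 (functions.sh@23). A minimal sketch of that mechanism, using a nameref in place of the eval seen in the trace and assuming nvme-cli is on PATH; not the verbatim SPDK helper:

    nvme_get() {
        local ref=$1 reg val
        shift
        declare -gA "$ref=()"      # (re)create the global associative array
        local -n _arr=$ref         # nameref instead of the trace's eval
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue   # skip non "field: value" lines
            reg=${reg//[[:space:]]/}               # strip padding around the key
            _arr[$reg]=${val# }                    # keep the value past one space
        done < <(nvme "$@")        # e.g. nvme id-ctrl /dev/nvme0
    }

    nvme_get nvme0 id-ctrl /dev/nvme0
    echo "oacs=${nvme0[oacs]} subnqn=${nvme0[subnqn]}"

This is why every field assignment in the log is bracketed by an IFS=: and a read entry: the loop re-reads one "field : value" line of identify output per iteration.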
00:10:24.283 04:06:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:24.283 04:06:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:24.283 04:06:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:24.283 04:06:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.283 04:06:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:24.283 04:06:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:24.283 04:06:25 -- scripts/common.sh@15 -- # local i 00:10:24.283 04:06:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:24.283 04:06:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:24.283 04:06:25 -- scripts/common.sh@24 -- # return 0 00:10:24.283 04:06:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:24.283 04:06:25 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:24.283 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.283 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.283 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:24.283 04:06:25 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:24.283 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:24.284 
04:06:25 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 
00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:24.284 04:06:25 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.284 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.284 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 
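For context on the nvme0 to nvme1 transition a few entries back: the script walks /sys/class/nvme/nvme*, resolves each controller's PCI address (the bdfs entry, e.g. 0000:00:08.0 for nvme1), skips devices rejected by pci_can_use (the block/allow filtering in scripts/common.sh), then calls nvme_get and records the controller in the ctrls/nvmes/bdfs/ordered_ctrls bookkeeping arrays. A hedged sketch of that loop; the readlink-based PCI lookup and the helper internals are assumptions, only the array names and call order come from the trace:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # Resolve the controller's PCI BDF from its sysfs device link (assumed layout)
        pci=$(basename "$(readlink -f "$ctrl/device")")
        pci_can_use "$pci" || continue           # honor block/allow lists
        ctrl_dev=${ctrl##*/}                     # e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

With an empty PCI_BLOCKED, the "[[ =~ 0000:00:08.0 ]]" test in the trace fails and "[[ -z '' ]]" passes, so pci_can_use returns 0 and the controller is kept.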
00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.285 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:24.285 04:06:25 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.285 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:24.286 04:06:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:24.286 04:06:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.286 04:06:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:24.286 04:06:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:24.286 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.286 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.286 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.286 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.286 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 
00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:24.287 04:06:25 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.287 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.287 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.287 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:24.288 04:06:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.288 04:06:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:24.288 04:06:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:24.288 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.288 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 
00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:24.288 04:06:25 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.288 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:24.288 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:24.288 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.289 
04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:10:24.289 04:06:25 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:10:24.289 04:06:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.289 04:06:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
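The trace above repeats one pattern per namespace node under /sys/class/nvme/nvme1: nvme_get shifts off the target array name, runs /usr/local/src/nvme-cli/nvme id-ns against the device, reads each output line as a "reg : val" pair with IFS set to ":", and evals the pair into a global associative array named after the node (nvme1n1, nvme1n2, and next nvme1n3). A minimal sketch of that loop, assuming bash 4.2+ for local -g; the helper name here is hypothetical, and the real nvme/functions.sh does additional key/value normalization not shown:

    nvme_get_sketch() {                    # usage: nvme_get_sketch nvme1n2 nvme id-ns /dev/nvme1n2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # declare the global associative array, e.g. nvme1n2=()
        while IFS=: read -r reg val; do
            reg=${reg// /}                 # "lbaf  4 " -> "lbaf4", matching the keys in the trace
            [[ -n $reg && -n $val ]] || continue   # skip blank values, as the [[ -n ... ]] checks do
            eval "${ref}[\$reg]=\${val# }" # same eval-into-array step the trace shows
        done < <("$@")
    }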
00:10:24.289 04:06:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:10:24.289 04:06:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:10:24.289 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.289 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.289 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.289 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.289 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.290 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.290 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.290 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.291 04:06:25 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:10:24.291 04:06:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:24.291 04:06:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:24.291 04:06:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:10:24.291 04:06:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:24.291 04:06:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.291 04:06:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:10:24.291 04:06:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:10:24.291 04:06:25 -- scripts/common.sh@15 -- # local i 00:10:24.291 04:06:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:10:24.291 04:06:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:24.291 04:06:25 -- scripts/common.sh@24 -- # return 0 00:10:24.291 04:06:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:24.291 04:06:25 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:24.291 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.291 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
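At this point nvme1 and its three namespaces are fully enumerated and registered in the bookkeeping arrays traced just before this loop iteration started: ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns, bdfs[nvme1]=0000:00:08.0, plus a slot in ordered_ctrls; the loop has moved on to nvme2 (PCI 0000:00:06.0) and is now filling nvme2[] from nvme id-ctrl. The namespace data is already usable as plain bash lookups: nsze=0x100000 combined with the in-use format lbaf4 (lbads:12, i.e. 4096-byte blocks) works out to 4 GiB per namespace. A hypothetical read-back, reusing only names that appear in the trace:

    echo "nvme1 sits on PCI ${bdfs[nvme1]}"          # 0000:00:08.0
    echo "nvme1n2 size: ${nvme1n2[nsze]} blocks"     # 0x100000
    blocks=$(( ${nvme1n2[nsze]} ))                   # hex literal evaluates to 1048576
    echo "bytes: $(( blocks * 4096 ))"               # 4294967296, i.e. 4 GiB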
00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 
04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.291 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.291 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:24.291 04:06:25 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:24.292 04:06:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:24.292 04:06:25 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.292 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:24.292 04:06:25 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.292 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 
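(Editor's note) The repeating IFS=: / read -r reg val / [[ -n ... ]] / eval quadruplets above are the xtrace of nvme/functions.sh's nvme_get helper walking the output of `nvme id-ctrl` one "field: value" line at a time and storing each field in a global associative array (nvme2 here). A minimal sketch of that loop, reconstructed purely from the @16-@23 trace markers visible in this log -- the whitespace trimming and quoting details are assumptions, not the verbatim SPDK source:

    nvme_get() {
        local ref=$1 reg val                 # @17: target array name, then loop vars
        shift                                # @18: remaining args form the nvme-cli command
        local -gA "$ref=()"                  # @20: declare a global associative array, e.g. nvme2=()
        # @16: run e.g. `nvme id-ctrl /dev/nvme2`; @21: split each output line on ':'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # @22: skip header/blank lines with no value
            reg=${reg//[[:space:]]/}         # strip padding around the field name (assumed)
            val=${val# }                     # drop the single space after the colon (assumed)
            eval "${ref}[${reg}]=\"${val}\"" # @23: nvme2[sqes]="0x66", nvme2[wctemp]="343", ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After `nvme_get nvme2 id-ctrl /dev/nvme2`, the identify fields read back as ${nvme2[sqes]} (0x66 in this log), ${nvme2[subnqn]}, and so on -- which is exactly what the per-register assignments in this trace are populating.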
00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 
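(Editor's note) For where these per-controller arrays end up: the functions.sh@47-@63 entries further down in this trace (the nvme2 namespace loop, the ctrls/nvmes/bdfs/ordered_ctrls assignments, and the hand-off to nvme3) outline the enumeration loop that drives nvme_get. A sketch of that loop under stated assumptions -- the function name, the BDF derivation at @49, and the ${ctrl_dev}_ns declaration are guesses; the @NN comments map to the trace markers, and pci_can_use is the scripts/common.sh filter traced at @15-@24:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    scan_nvme_ctrls() {                                       # function name is an assumption
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do                 # @47
            [[ -e $ctrl ]] || continue                        # @48
            pci=$(readlink -f "$ctrl/device"); pci=${pci##*/} # @49: BDF, e.g. 0000:00:06.0 (derivation assumed)
            pci_can_use "$pci" || continue                    # @50: skip excluded devices
            ctrl_dev=${ctrl##*/}                              # @51: nvme2, nvme3, ...
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52: fill nvme2[...], nvme3[...]
            local -gA "${ctrl_dev}_ns=()"                     # implied by @53; not shown in the trace
            local -n _ctrl_ns=${ctrl_dev}_ns                  # @53: per-controller namespace map
            for ns in "$ctrl/${ctrl##*/}n"*; do               # @54: nvme2n1, nvme2n2, ...
                [[ -e $ns ]] || continue                      # @55
                ns_dev=${ns##*/}                              # @56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # @57: fill nvme2n1[...], etc.
                _ctrl_ns[${ns##*n}]=$ns_dev                   # @58: index by namespace number
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61
            bdfs["$ctrl_dev"]=$pci                            # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63
        done
    }

Per this log the scan records bdfs[nvme2]=0000:00:06.0 and then moves on to nvme3 at 0000:00:07.0; later test stages can map each controller back to its PCI device through these arrays.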
00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.293 04:06:25 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:24.293 04:06:25 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:24.293 04:06:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.293 04:06:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:24.293 04:06:25 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:24.293 04:06:25 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:24.293 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.293 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.293 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.293 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:24.293 04:06:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 
04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:24.294 04:06:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:24.294 
04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.294 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.294 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.294 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:24.295 04:06:25 
-- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:24.295 04:06:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:24.295 04:06:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:24.295 04:06:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:24.295 04:06:25 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:24.295 04:06:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.295 04:06:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:24.295 04:06:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:24.295 04:06:25 -- scripts/common.sh@15 -- # local i 00:10:24.295 04:06:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:24.295 04:06:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:24.295 04:06:25 -- scripts/common.sh@24 -- # return 0 00:10:24.295 04:06:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:24.295 04:06:25 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:24.295 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.295 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.295 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.295 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:24.295 04:06:25 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 
04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.296 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:24.296 04:06:25 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:24.296 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:24.297 
04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.297 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:24.297 04:06:25 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:24.297 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:24.298 04:06:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.298 04:06:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:24.298 04:06:25 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:24.298 04:06:25 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@18 -- # shift 00:10:24.298 04:06:25 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:24.298 
04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.298 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.298 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.298 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 
04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.299 04:06:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.299 04:06:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.299 04:06:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:24.299 04:06:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:24.299 04:06:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:24.299 04:06:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:24.299 04:06:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:24.299 04:06:25 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:24.299 04:06:25 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:24.299 04:06:25 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:10:24.299 04:06:25 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:24.299 04:06:25 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:10:24.299 04:06:25 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:24.299 04:06:25 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:10:24.299 04:06:25 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:10:24.299 04:06:25 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:24.299 04:06:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:10:24.299 04:06:25 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:10:24.299 04:06:25 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:10:24.299 04:06:25 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:10:24.299 04:06:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:24.299 04:06:25 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:24.299 04:06:25 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:24.299 04:06:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:24.299 04:06:25 -- nvme/functions.sh@76 -- # echo 0x8000 00:10:24.299 04:06:25 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:24.299 04:06:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:24.299 04:06:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:24.299 04:06:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:10:24.299 04:06:25 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:10:24.299 04:06:25 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:10:24.299 04:06:25 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:10:24.299 04:06:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:24.299 04:06:25 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt
00:10:24.299 04:06:25 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:10:24.299 04:06:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:10:24.299 04:06:25 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]]
00:10:24.299 04:06:25 -- nvme/functions.sh@76 -- # echo 0x88010
00:10:24.299 04:06:25 -- nvme/functions.sh@176 -- # ctratt=0x88010
00:10:24.299 04:06:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:24.299 04:06:25 -- nvme/functions.sh@197 -- # echo nvme0
00:10:24.299 04:06:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:24.299 04:06:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3
00:10:24.299 04:06:25 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt
00:10:24.299 04:06:25 -- nvme/functions.sh@176 -- # get_ctratt nvme3
00:10:24.299 04:06:25 -- nvme/functions.sh@164 -- # local ctrl=nvme3
00:10:24.299 04:06:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt
00:10:24.299 04:06:25 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt
00:10:24.299 04:06:25 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:10:24.299 04:06:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:10:24.299 04:06:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:10:24.299 04:06:25 -- nvme/functions.sh@76 -- # echo 0x8000
00:10:24.299 04:06:25 -- nvme/functions.sh@176 -- # ctratt=0x8000
00:10:24.299 04:06:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:24.300 04:06:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:24.300 04:06:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2
00:10:24.300 04:06:25 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt
00:10:24.300 04:06:25 -- nvme/functions.sh@176 -- # get_ctratt nvme2
00:10:24.300 04:06:25 -- nvme/functions.sh@164 -- # local ctrl=nvme2
00:10:24.300 04:06:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt
00:10:24.300 04:06:25 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt
00:10:24.300 04:06:25 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:10:24.300 04:06:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:10:24.300 04:06:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:10:24.300 04:06:25 -- nvme/functions.sh@76 -- # echo 0x8000
00:10:24.300 04:06:25 -- nvme/functions.sh@176 -- # ctratt=0x8000
00:10:24.300 04:06:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:24.300 04:06:25 -- nvme/functions.sh@204 -- # trap - ERR
00:10:24.300 04:06:25 -- nvme/functions.sh@204 -- # print_backtrace
00:10:24.300 04:06:25 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:10:24.300 04:06:25 -- common/autotest_common.sh@1142 -- # return 0
00:10:24.300 04:06:25 -- nvme/functions.sh@204 -- # trap - ERR
00:10:24.300 04:06:25 -- nvme/functions.sh@204 -- # print_backtrace
00:10:24.300 04:06:25 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:10:24.300 04:06:25 -- common/autotest_common.sh@1142 -- # return 0
00:10:24.300 04:06:25 -- nvme/functions.sh@205 -- # (( 1 > 0 ))
00:10:24.300 04:06:25 -- nvme/functions.sh@206 -- # echo nvme0
00:10:24.300 04:06:25 -- nvme/functions.sh@207 -- # return 0
00:10:24.300 04:06:25 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0
00:10:24.300 04:06:25 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0
00:10:24.300 04:06:25 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:25.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:25.244 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
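ctrl_has_fdp is a CTRATT bit test: nvme0 reports ctratt=0x88010, which has bit 19 (Flexible Data Placement) set, so get_ctrls_with_feature echoes nvme0; the controllers reporting 0x8000 fail `(( ctratt & 1 << 19 ))`. The same probe can be run directly with nvme-cli; a sketch, with the device glob as an assumption, to be run before setup.sh rebinds the controllers away from the kernel driver:

# Sketch: list controllers whose Identify Controller CTRATT has bit 19 set,
# i.e. that advertise FDP support.
for ctrl in /dev/nvme[0-9]; do
  [[ -e $ctrl ]] || continue
  ctratt=$(nvme id-ctrl "$ctrl" | awk -F': *' '/^ctratt/ {print $2}')
  if (( ctratt & 1 << 19 )); then
    echo "$ctrl supports FDP (ctratt=$ctratt)"
  fi
done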
00:10:25.244 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.244 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.244 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:10:25.244 04:06:26 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0'
00:10:25.244 04:06:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:10:25.244 04:06:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:25.244 04:06:26 -- common/autotest_common.sh@10 -- # set +x
00:10:25.244 ************************************
00:10:25.244 START TEST nvme_flexible_data_placement
00:10:25.244 ************************************
00:10:25.244 04:06:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0'
00:10:25.505 Initializing NVMe Controllers
00:10:25.505 Attaching to 0000:00:09.0
00:10:25.505 Controller supports FDP Attached to 0000:00:09.0
00:10:25.505 Namespace ID: 1 Endurance Group ID: 1
00:10:25.505 Initialization complete.
00:10:25.505
00:10:25.505 ==================================
00:10:25.505 == FDP tests for Namespace: #01 ==
00:10:25.505 ==================================
00:10:25.505
00:10:25.505 Get Feature: FDP:
00:10:25.505 =================
00:10:25.505 Enabled: Yes
00:10:25.505 FDP configuration Index: 0
00:10:25.505
00:10:25.505 FDP configurations log page
00:10:25.505 ===========================
00:10:25.505 Number of FDP configurations: 1
00:10:25.505 Version: 0
00:10:25.505 Size: 112
00:10:25.505 FDP Configuration Descriptor: 0
00:10:25.505 Descriptor Size: 96
00:10:25.505 Reclaim Group Identifier format: 2
00:10:25.505 FDP Volatile Write Cache: Not Present
00:10:25.505 FDP Configuration: Valid
00:10:25.505 Vendor Specific Size: 0
00:10:25.505 Number of Reclaim Groups: 2
00:10:25.505 Number of Reclaim Unit Handles: 8
00:10:25.505 Max Placement Identifiers: 128
00:10:25.505 Number of Namespaces Supported: 256
00:10:25.505 Reclaim Unit Nominal Size: 6000000 bytes
00:10:25.505 Estimated Reclaim Unit Time Limit: Not Reported
00:10:25.505 RUH Desc #000: RUH Type: Initially Isolated
00:10:25.505 RUH Desc #001: RUH Type: Initially Isolated
00:10:25.505 RUH Desc #002: RUH Type: Initially Isolated
00:10:25.505 RUH Desc #003: RUH Type: Initially Isolated
00:10:25.505 RUH Desc #004: RUH Type: Initially Isolated
00:10:25.505 RUH Desc #005: RUH Type: Initially Isolated
00:10:25.505 RUH Desc #006: RUH Type: Initially Isolated
00:10:25.505 RUH Desc #007: RUH Type: Initially Isolated
00:10:25.505
00:10:25.505 FDP reclaim unit handle usage log page
00:10:25.505 ======================================
00:10:25.505 Number of Reclaim Unit Handles: 8
00:10:25.505 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:25.505 RUH Usage Desc #001: RUH Attributes: Unused
00:10:25.505 RUH Usage Desc #002: RUH Attributes: Unused
00:10:25.505 RUH Usage Desc #003: RUH Attributes: Unused
00:10:25.505 RUH Usage Desc #004: RUH Attributes: Unused
00:10:25.505 RUH Usage Desc #005: RUH Attributes: Unused
00:10:25.505 RUH Usage Desc #006: RUH Attributes: Unused
00:10:25.505 RUH Usage Desc #007: RUH Attributes: Unused
00:10:25.505
00:10:25.505 FDP statistics log page
00:10:25.505 =======================
00:10:25.505 Host bytes with metadata written: 1732964352
00:10:25.505 Media bytes with metadata written: 1733496832
00:10:25.505 Media bytes erased: 0
00:10:25.505
00:10:25.505 FDP Reclaim unit handle status
00:10:25.505 ==============================
00:10:25.505 Number of RUHS descriptors: 2
00:10:25.505 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004b51
00:10:25.505 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:25.505
00:10:25.505 FDP write on placement id: 0 success
00:10:25.505
00:10:25.505 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:10:25.505
00:10:25.505 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:25.505
00:10:25.505 Get Feature: FDP Events for Placement handle: #0
00:10:25.505 ========================
00:10:25.505 Number of FDP Events: 6
00:10:25.505 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:25.505 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:25.505 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:10:25.505 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:25.505 FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:25.505 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:10:25.505
00:10:25.505 FDP events log page
00:10:25.505 ===================
00:10:25.505 Number of FDP events: 1
00:10:25.505 FDP Event #0:
00:10:25.505 Event Type: RU Not Written to Capacity
00:10:25.505 Placement Identifier: Valid
00:10:25.505 NSID: Valid
00:10:25.505 Location: Valid
00:10:25.505 Placement Identifier: 0
00:10:25.505 Event Timestamp: 4
00:10:25.505 Namespace Identifier: 1
00:10:25.505 Reclaim Group Identifier: 0
00:10:25.505 Reclaim Unit Handle Identifier: 0
00:10:25.505
00:10:25.505 FDP test passed
00:10:25.505
00:10:25.505 real 0m0.211s
00:10:25.505 user 0m0.049s
00:10:25.505 sys 0m0.061s
00:10:25.505 04:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:25.505 ************************************
00:10:25.505 END TEST nvme_flexible_data_placement
00:10:25.505 ************************************
00:10:25.505 04:06:27 -- common/autotest_common.sh@10 -- # set +x
00:10:25.505 ************************************
00:10:25.505 END TEST nvme_fdp
00:10:25.505 ************************************
00:10:25.505
00:10:25.505 real 0m7.488s
00:10:25.505 user 0m0.999s
00:10:25.505 sys 0m1.407s
00:10:25.505 04:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:25.505 04:06:27 -- common/autotest_common.sh@10 -- # set +x
00:10:25.505 04:06:27 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:10:25.505 04:06:27 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:25.505 04:06:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:25.505 04:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:25.505 04:06:27 -- common/autotest_common.sh@10 -- # set +x
00:10:25.766 ************************************
00:10:25.766 START TEST nvme_rpc
00:10:25.766 ************************************
00:10:25.766 04:06:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:25.766 * Looking for test storage...
00:10:25.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:25.766 04:06:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:25.766 04:06:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:25.766 04:06:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:25.766 04:06:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:25.766 04:06:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:25.766 04:06:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:25.766 04:06:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:25.766 04:06:27 -- scripts/common.sh@335 -- # IFS=.-: 00:10:25.766 04:06:27 -- scripts/common.sh@335 -- # read -ra ver1 00:10:25.766 04:06:27 -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.766 04:06:27 -- scripts/common.sh@336 -- # read -ra ver2 00:10:25.766 04:06:27 -- scripts/common.sh@337 -- # local 'op=<' 00:10:25.766 04:06:27 -- scripts/common.sh@339 -- # ver1_l=2 00:10:25.766 04:06:27 -- scripts/common.sh@340 -- # ver2_l=1 00:10:25.766 04:06:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:25.766 04:06:27 -- scripts/common.sh@343 -- # case "$op" in 00:10:25.766 04:06:27 -- scripts/common.sh@344 -- # : 1 00:10:25.766 04:06:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:25.766 04:06:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.766 04:06:27 -- scripts/common.sh@364 -- # decimal 1 00:10:25.766 04:06:27 -- scripts/common.sh@352 -- # local d=1 00:10:25.766 04:06:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.766 04:06:27 -- scripts/common.sh@354 -- # echo 1 00:10:25.766 04:06:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:25.766 04:06:27 -- scripts/common.sh@365 -- # decimal 2 00:10:25.766 04:06:27 -- scripts/common.sh@352 -- # local d=2 00:10:25.766 04:06:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.766 04:06:27 -- scripts/common.sh@354 -- # echo 2 00:10:25.766 04:06:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:25.766 04:06:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:25.766 04:06:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:25.766 04:06:27 -- scripts/common.sh@367 -- # return 0 00:10:25.766 04:06:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.766 04:06:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.766 --rc genhtml_branch_coverage=1 00:10:25.766 --rc genhtml_function_coverage=1 00:10:25.766 --rc genhtml_legend=1 00:10:25.766 --rc geninfo_all_blocks=1 00:10:25.766 --rc geninfo_unexecuted_blocks=1 00:10:25.766 00:10:25.766 ' 00:10:25.766 04:06:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.766 --rc genhtml_branch_coverage=1 00:10:25.766 --rc genhtml_function_coverage=1 00:10:25.766 --rc genhtml_legend=1 00:10:25.766 --rc geninfo_all_blocks=1 00:10:25.766 --rc geninfo_unexecuted_blocks=1 00:10:25.766 00:10:25.766 ' 00:10:25.766 04:06:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.766 --rc genhtml_branch_coverage=1 00:10:25.766 --rc genhtml_function_coverage=1 00:10:25.766 --rc genhtml_legend=1 00:10:25.766 --rc geninfo_all_blocks=1 00:10:25.766 --rc geninfo_unexecuted_blocks=1 00:10:25.766 00:10:25.766 ' 00:10:25.766 04:06:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.766 --rc genhtml_branch_coverage=1 00:10:25.766 --rc genhtml_function_coverage=1 00:10:25.766 --rc genhtml_legend=1 00:10:25.766 --rc geninfo_all_blocks=1 00:10:25.766 --rc geninfo_unexecuted_blocks=1 00:10:25.766 00:10:25.766 ' 00:10:25.766 04:06:27 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.766 04:06:27 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:25.766 04:06:27 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:25.766 04:06:27 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:25.766 04:06:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:25.766 04:06:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:25.766 04:06:27 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:25.766 04:06:27 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:25.766 04:06:27 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:25.766 04:06:27 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:25.766 04:06:27 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:25.766 04:06:27 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:25.766 04:06:27 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:25.766 04:06:27 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:25.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.766 04:06:27 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:10:25.766 04:06:27 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78015 00:10:25.766 04:06:27 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:25.766 04:06:27 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78015 00:10:25.766 04:06:27 -- common/autotest_common.sh@829 -- # '[' -z 78015 ']' 00:10:25.766 04:06:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.766 04:06:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.766 04:06:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.766 04:06:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.766 04:06:27 -- common/autotest_common.sh@10 -- # set +x 00:10:25.766 04:06:27 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:26.027 [2024-11-26 04:06:27.549360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
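get_first_nvme_bdf above collects controller PCI addresses by piping scripts/gen_nvme.sh output through `jq -r '.config[].params.traddr'` and taking the first entry, 0000:00:06.0 here. A rough sysfs-only equivalent, offered as a sketch rather than the harness's own method:

# Sketch: enumerate NVMe controller BDFs from sysfs and take the first,
# approximating get_first_nvme_bdf without SPDK's gen_nvme.sh helper.
bdfs=()
for dev in /sys/class/nvme/nvme*; do
  [[ -e $dev ]] || continue
  bdfs+=("$(basename "$(readlink -f "$dev/device")")")   # e.g. 0000:00:06.0
done
(( ${#bdfs[@]} )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"
echo "first bdf: ${bdfs[0]}"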
00:10:26.027 [2024-11-26 04:06:27.549481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78015 ] 00:10:26.027 [2024-11-26 04:06:27.701175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:26.027 [2024-11-26 04:06:27.743136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:26.027 [2024-11-26 04:06:27.743548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.027 [2024-11-26 04:06:27.743615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.599 04:06:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.599 04:06:28 -- common/autotest_common.sh@862 -- # return 0 00:10:26.599 04:06:28 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:10:26.860 Nvme0n1 00:10:26.860 04:06:28 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:26.860 04:06:28 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:27.121 request: 00:10:27.121 { 00:10:27.121 "filename": "non_existing_file", 00:10:27.121 "bdev_name": "Nvme0n1", 00:10:27.121 "method": "bdev_nvme_apply_firmware", 00:10:27.121 "req_id": 1 00:10:27.121 } 00:10:27.121 Got JSON-RPC error response 00:10:27.121 response: 00:10:27.121 { 00:10:27.121 "code": -32603, 00:10:27.121 "message": "open file failed." 00:10:27.121 } 00:10:27.121 04:06:28 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:27.121 04:06:28 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:27.121 04:06:28 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:27.388 04:06:28 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:27.388 04:06:28 -- nvme/nvme_rpc.sh@40 -- # killprocess 78015 00:10:27.388 04:06:28 -- common/autotest_common.sh@936 -- # '[' -z 78015 ']' 00:10:27.388 04:06:28 -- common/autotest_common.sh@940 -- # kill -0 78015 00:10:27.388 04:06:28 -- common/autotest_common.sh@941 -- # uname 00:10:27.388 04:06:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:27.388 04:06:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78015 00:10:27.388 killing process with pid 78015 00:10:27.388 04:06:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:27.388 04:06:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:27.388 04:06:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78015' 00:10:27.388 04:06:29 -- common/autotest_common.sh@955 -- # kill 78015 00:10:27.388 04:06:29 -- common/autotest_common.sh@960 -- # wait 78015 00:10:27.647 ************************************ 00:10:27.647 END TEST nvme_rpc 00:10:27.647 ************************************ 00:10:27.647 00:10:27.647 real 0m2.012s 00:10:27.647 user 0m3.804s 00:10:27.647 sys 0m0.520s 00:10:27.647 04:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:27.647 04:06:29 -- common/autotest_common.sh@10 -- # set +x 00:10:27.647 04:06:29 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:27.647 04:06:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:27.647 04:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 
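The request/response pair above is the heart of nvme_rpc: bdev_nvme_apply_firmware is handed a file that does not exist and must fail with -32603 "open file failed.". The same sequence can be replayed against a running spdk_tgt; a sketch using only the rpc.py calls already shown in this log, with the BDF specific to this environment:

# Sketch: replay the nvme_rpc flow -- attach, expect apply_firmware to fail,
# then detach. Assumes spdk_tgt is already listening on /var/tmp/spdk.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
  echo "ERROR: apply_firmware unexpectedly succeeded" >&2
  exit 1
fi
$rpc bdev_nvme_detach_controller Nvme0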
00:10:27.647 04:06:29 -- common/autotest_common.sh@10 -- # set +x 00:10:27.647 ************************************ 00:10:27.647 START TEST nvme_rpc_timeouts 00:10:27.647 ************************************ 00:10:27.647 04:06:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:27.908 * Looking for test storage... 00:10:27.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:27.908 04:06:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:27.908 04:06:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:27.908 04:06:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:27.908 04:06:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:27.908 04:06:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:27.908 04:06:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:27.908 04:06:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:27.908 04:06:29 -- scripts/common.sh@335 -- # IFS=.-: 00:10:27.908 04:06:29 -- scripts/common.sh@335 -- # read -ra ver1 00:10:27.908 04:06:29 -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.908 04:06:29 -- scripts/common.sh@336 -- # read -ra ver2 00:10:27.908 04:06:29 -- scripts/common.sh@337 -- # local 'op=<' 00:10:27.908 04:06:29 -- scripts/common.sh@339 -- # ver1_l=2 00:10:27.908 04:06:29 -- scripts/common.sh@340 -- # ver2_l=1 00:10:27.908 04:06:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:27.908 04:06:29 -- scripts/common.sh@343 -- # case "$op" in 00:10:27.908 04:06:29 -- scripts/common.sh@344 -- # : 1 00:10:27.908 04:06:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:27.908 04:06:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:27.908 04:06:29 -- scripts/common.sh@364 -- # decimal 1 00:10:27.908 04:06:29 -- scripts/common.sh@352 -- # local d=1 00:10:27.908 04:06:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.908 04:06:29 -- scripts/common.sh@354 -- # echo 1 00:10:27.908 04:06:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:27.908 04:06:29 -- scripts/common.sh@365 -- # decimal 2 00:10:27.908 04:06:29 -- scripts/common.sh@352 -- # local d=2 00:10:27.908 04:06:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.908 04:06:29 -- scripts/common.sh@354 -- # echo 2 00:10:27.908 04:06:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:27.908 04:06:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:27.908 04:06:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:27.908 04:06:29 -- scripts/common.sh@367 -- # return 0 00:10:27.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
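The scripts/common.sh trace above is cmp_versions deciding whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared numerically, component by component. A self-contained sketch of that comparison; the function name ver_lt is mine, and numeric components are assumed:

# Sketch: "is version A strictly older than version B?", in the style of
# scripts/common.sh's lt / cmp_versions.
ver_lt() {
  local -a a b
  IFS='.-:' read -ra a <<< "$1"
  IFS='.-:' read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # missing components count as 0
    (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "1.15 < 2"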
00:10:27.908 04:06:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.908 04:06:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:27.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.908 --rc genhtml_branch_coverage=1 00:10:27.908 --rc genhtml_function_coverage=1 00:10:27.908 --rc genhtml_legend=1 00:10:27.908 --rc geninfo_all_blocks=1 00:10:27.908 --rc geninfo_unexecuted_blocks=1 00:10:27.908 00:10:27.908 ' 00:10:27.908 04:06:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:27.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.908 --rc genhtml_branch_coverage=1 00:10:27.908 --rc genhtml_function_coverage=1 00:10:27.908 --rc genhtml_legend=1 00:10:27.908 --rc geninfo_all_blocks=1 00:10:27.908 --rc geninfo_unexecuted_blocks=1 00:10:27.908 00:10:27.908 ' 00:10:27.908 04:06:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:27.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.908 --rc genhtml_branch_coverage=1 00:10:27.908 --rc genhtml_function_coverage=1 00:10:27.908 --rc genhtml_legend=1 00:10:27.908 --rc geninfo_all_blocks=1 00:10:27.908 --rc geninfo_unexecuted_blocks=1 00:10:27.908 00:10:27.908 ' 00:10:27.909 04:06:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:27.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.909 --rc genhtml_branch_coverage=1 00:10:27.909 --rc genhtml_function_coverage=1 00:10:27.909 --rc genhtml_legend=1 00:10:27.909 --rc geninfo_all_blocks=1 00:10:27.909 --rc geninfo_unexecuted_blocks=1 00:10:27.909 00:10:27.909 ' 00:10:27.909 04:06:29 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.909 04:06:29 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78069 00:10:27.909 04:06:29 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78069 00:10:27.909 04:06:29 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78100 00:10:27.909 04:06:29 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:27.909 04:06:29 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78100 00:10:27.909 04:06:29 -- common/autotest_common.sh@829 -- # '[' -z 78100 ']' 00:10:27.909 04:06:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.909 04:06:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.909 04:06:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.909 04:06:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.909 04:06:29 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:27.909 04:06:29 -- common/autotest_common.sh@10 -- # set +x 00:10:27.909 [2024-11-26 04:06:29.589929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:27.909 [2024-11-26 04:06:29.590229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78100 ] 00:10:28.233 [2024-11-26 04:06:29.741344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:28.233 [2024-11-26 04:06:29.773960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:28.233 [2024-11-26 04:06:29.774698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.233 [2024-11-26 04:06:29.774768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.804 04:06:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.804 04:06:30 -- common/autotest_common.sh@862 -- # return 0 00:10:28.804 04:06:30 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:28.805 Checking default timeout settings: 00:10:28.805 04:06:30 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:29.065 04:06:30 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:29.065 Making settings changes with rpc: 00:10:29.065 04:06:30 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:29.326 Check default vs. modified settings: 00:10:29.326 04:06:30 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:29.326 04:06:30 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78069 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78069 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:29.586 Setting action_on_timeout is changed as expected. 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78069 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78069 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:29.586 Setting timeout_us is changed as expected. 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78069 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78069 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:29.586 Setting timeout_admin_us is changed as expected. 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78069 /tmp/settings_modified_78069 00:10:29.586 04:06:31 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78100 00:10:29.586 04:06:31 -- common/autotest_common.sh@936 -- # '[' -z 78100 ']' 00:10:29.586 04:06:31 -- common/autotest_common.sh@940 -- # kill -0 78100 00:10:29.586 04:06:31 -- common/autotest_common.sh@941 -- # uname 00:10:29.586 04:06:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:29.586 04:06:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78100 00:10:29.586 killing process with pid 78100 00:10:29.586 04:06:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:29.586 04:06:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:29.586 04:06:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78100' 00:10:29.586 04:06:31 -- common/autotest_common.sh@955 -- # kill 78100 00:10:29.586 04:06:31 -- common/autotest_common.sh@960 -- # wait 78100 00:10:29.846 RPC TIMEOUT SETTING TEST PASSED. 00:10:29.846 04:06:31 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
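Each pass of the settings loop above greps one field out of the saved default and modified configs, keeps column 2, strips punctuation, and asserts the value actually moved (none -> abort for action_on_timeout, 0 -> 12000000 for timeout_us, 0 -> 24000000 for timeout_admin_us). For ad-hoc checking, the same extraction is less fragile with jq, since a bare grep for timeout_us also matches timeout_admin_us; a sketch, assuming save_config's JSON shape as dumped into the test's temp files:

# Sketch: pull the bdev_nvme_set_options params straight out of save_config
# with jq instead of grep/awk/sed.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config | jq '.subsystems[]
  | select(.subsystem == "bdev").config[]
  | select(.method == "bdev_nvme_set_options").params
  | {action_on_timeout, timeout_us, timeout_admin_us}'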
00:10:29.846 ************************************ 00:10:29.846 END TEST nvme_rpc_timeouts 00:10:29.846 ************************************ 00:10:29.846 00:10:29.846 real 0m2.147s 00:10:29.846 user 0m4.242s 00:10:29.846 sys 0m0.502s 00:10:29.846 04:06:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:29.846 04:06:31 -- common/autotest_common.sh@10 -- # set +x 00:10:29.846 04:06:31 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:10:29.846 04:06:31 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:10:29.846 04:06:31 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:10:29.846 04:06:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:29.846 04:06:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:29.846 04:06:31 -- common/autotest_common.sh@10 -- # set +x 00:10:29.846 ************************************ 00:10:29.846 START TEST nvme_xnvme 00:10:29.846 ************************************ 00:10:29.846 04:06:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:10:30.109 * Looking for test storage... 00:10:30.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:10:30.109 04:06:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:30.109 04:06:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:30.109 04:06:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:30.109 04:06:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:30.109 04:06:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:30.109 04:06:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:30.109 04:06:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:30.109 04:06:31 -- scripts/common.sh@335 -- # IFS=.-: 00:10:30.109 04:06:31 -- scripts/common.sh@335 -- # read -ra ver1 00:10:30.109 04:06:31 -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.109 04:06:31 -- scripts/common.sh@336 -- # read -ra ver2 00:10:30.109 04:06:31 -- scripts/common.sh@337 -- # local 'op=<' 00:10:30.109 04:06:31 -- scripts/common.sh@339 -- # ver1_l=2 00:10:30.109 04:06:31 -- scripts/common.sh@340 -- # ver2_l=1 00:10:30.109 04:06:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:30.109 04:06:31 -- scripts/common.sh@343 -- # case "$op" in 00:10:30.109 04:06:31 -- scripts/common.sh@344 -- # : 1 00:10:30.109 04:06:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:30.109 04:06:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.109 04:06:31 -- scripts/common.sh@364 -- # decimal 1 00:10:30.109 04:06:31 -- scripts/common.sh@352 -- # local d=1 00:10:30.109 04:06:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.109 04:06:31 -- scripts/common.sh@354 -- # echo 1 00:10:30.109 04:06:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:30.109 04:06:31 -- scripts/common.sh@365 -- # decimal 2 00:10:30.109 04:06:31 -- scripts/common.sh@352 -- # local d=2 00:10:30.109 04:06:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.109 04:06:31 -- scripts/common.sh@354 -- # echo 2 00:10:30.109 04:06:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:30.109 04:06:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:30.109 04:06:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:30.109 04:06:31 -- scripts/common.sh@367 -- # return 0 00:10:30.109 04:06:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.109 04:06:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:30.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.109 --rc genhtml_branch_coverage=1 00:10:30.109 --rc genhtml_function_coverage=1 00:10:30.109 --rc genhtml_legend=1 00:10:30.109 --rc geninfo_all_blocks=1 00:10:30.109 --rc geninfo_unexecuted_blocks=1 00:10:30.109 00:10:30.109 ' 00:10:30.109 04:06:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:30.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.109 --rc genhtml_branch_coverage=1 00:10:30.109 --rc genhtml_function_coverage=1 00:10:30.109 --rc genhtml_legend=1 00:10:30.109 --rc geninfo_all_blocks=1 00:10:30.109 --rc geninfo_unexecuted_blocks=1 00:10:30.109 00:10:30.109 ' 00:10:30.109 04:06:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:30.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.109 --rc genhtml_branch_coverage=1 00:10:30.109 --rc genhtml_function_coverage=1 00:10:30.109 --rc genhtml_legend=1 00:10:30.109 --rc geninfo_all_blocks=1 00:10:30.109 --rc geninfo_unexecuted_blocks=1 00:10:30.109 00:10:30.109 ' 00:10:30.109 04:06:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:30.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.109 --rc genhtml_branch_coverage=1 00:10:30.109 --rc genhtml_function_coverage=1 00:10:30.109 --rc genhtml_legend=1 00:10:30.109 --rc geninfo_all_blocks=1 00:10:30.109 --rc geninfo_unexecuted_blocks=1 00:10:30.109 00:10:30.109 ' 00:10:30.109 04:06:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.109 04:06:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.109 04:06:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.109 04:06:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.109 04:06:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.109 04:06:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.109 04:06:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.109 04:06:31 -- paths/export.sh@5 -- # export PATH 00:10:30.109 04:06:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:10:30.109 04:06:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:30.109 04:06:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.109 04:06:31 -- common/autotest_common.sh@10 -- # set +x 00:10:30.109 ************************************ 00:10:30.109 START TEST xnvme_to_malloc_dd_copy 00:10:30.109 ************************************ 00:10:30.109 04:06:31 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:10:30.109 04:06:31 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:10:30.109 04:06:31 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:10:30.109 04:06:31 -- dd/common.sh@191 -- # return 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@18 -- # local io 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:10:30.109 04:06:31 -- xnvme/xnvme.sh@42 -- # gen_conf 00:10:30.109 04:06:31 -- dd/common.sh@31 -- # xtrace_disable 00:10:30.109 04:06:31 -- common/autotest_common.sh@10 -- # set +x 00:10:30.109 { 00:10:30.109 "subsystems": [ 00:10:30.109 { 00:10:30.109 "subsystem": "bdev", 00:10:30.109 "config": [ 00:10:30.109 { 00:10:30.109 "params": { 00:10:30.109 "block_size": 512, 00:10:30.109 "num_blocks": 2097152, 00:10:30.109 "name": "malloc0" 00:10:30.109 }, 00:10:30.109 "method": "bdev_malloc_create" 00:10:30.109 }, 00:10:30.109 { 00:10:30.109 "params": { 00:10:30.109 "io_mechanism": "libaio", 00:10:30.109 "filename": "/dev/nullb0", 00:10:30.109 "name": "null0" 00:10:30.109 }, 00:10:30.109 "method": "bdev_xnvme_create" 00:10:30.109 }, 00:10:30.109 { 00:10:30.109 "method": "bdev_wait_for_examine" 00:10:30.109 } 00:10:30.109 ] 00:10:30.109 } 00:10:30.109 ] 00:10:30.109 } 00:10:30.109 [2024-11-26 04:06:31.780397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:30.109 [2024-11-26 04:06:31.780531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78217 ] 00:10:30.370 [2024-11-26 04:06:31.926488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.370 [2024-11-26 04:06:31.957431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.751  [2024-11-26T04:06:34.459Z] Copying: 235/1024 [MB] (235 MBps) [2024-11-26T04:06:35.397Z] Copying: 471/1024 [MB] (236 MBps) [2024-11-26T04:06:36.332Z] Copying: 711/1024 [MB] (239 MBps) [2024-11-26T04:06:36.591Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:10:34.823 00:10:34.823 04:06:36 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:10:34.823 04:06:36 -- xnvme/xnvme.sh@47 -- # gen_conf 00:10:34.823 04:06:36 -- dd/common.sh@31 -- # xtrace_disable 00:10:34.823 04:06:36 -- common/autotest_common.sh@10 -- # set +x 00:10:34.823 { 00:10:34.823 "subsystems": [ 00:10:34.823 { 00:10:34.823 "subsystem": "bdev", 00:10:34.823 "config": [ 00:10:34.823 { 00:10:34.823 "params": { 00:10:34.823 "block_size": 512, 00:10:34.823 "num_blocks": 2097152, 00:10:34.823 "name": "malloc0" 00:10:34.823 }, 00:10:34.823 "method": "bdev_malloc_create" 00:10:34.823 }, 00:10:34.823 { 00:10:34.823 "params": { 00:10:34.823 "io_mechanism": "libaio", 00:10:34.823 "filename": "/dev/nullb0", 00:10:34.823 "name": "null0" 00:10:34.823 }, 00:10:34.823 "method": "bdev_xnvme_create" 00:10:34.823 }, 00:10:34.823 { 00:10:34.823 "method": "bdev_wait_for_examine" 00:10:34.823 } 00:10:34.823 ] 00:10:34.823 } 00:10:34.823 ] 00:10:34.823 } 00:10:34.823 [2024-11-26 04:06:36.583061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
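The JSON streamed over /dev/fd/62 above fully describes the copy pipeline: a 1 GiB ramdisk (2097152 blocks of 512 bytes) named malloc0, and an xnvme bdev null0 over /dev/nullb0 using libaio. A minimal standalone sketch of the same copy, assuming the repo layout from this run (the /tmp config path below is illustrative, not from the log):

    # Reproduce the malloc0 -> null0 libaio copy by hand.
    sudo modprobe null_blk gb=1          # backs /dev/nullb0, as init_null_blk does
    cat > /tmp/xnvme_copy.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [
          {"method": "bdev_malloc_create",
           "params": {"name": "malloc0", "block_size": 512, "num_blocks": 2097152}},
          {"method": "bdev_xnvme_create",
           "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio"}},
          {"method": "bdev_wait_for_examine"}
        ]
      }]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json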
00:10:34.823 [2024-11-26 04:06:36.583171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78283 ] 00:10:35.082 [2024-11-26 04:06:36.730288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.082 [2024-11-26 04:06:36.759559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.457  [2024-11-26T04:06:39.164Z] Copying: 321/1024 [MB] (321 MBps) [2024-11-26T04:06:40.106Z] Copying: 603/1024 [MB] (282 MBps) [2024-11-26T04:06:41.047Z] Copying: 843/1024 [MB] (239 MBps) [2024-11-26T04:06:41.307Z] Copying: 1024/1024 [MB] (average 272 MBps) 00:10:39.539 00:10:39.539 04:06:41 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:10:39.539 04:06:41 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:10:39.539 04:06:41 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:10:39.539 04:06:41 -- xnvme/xnvme.sh@42 -- # gen_conf 00:10:39.539 04:06:41 -- dd/common.sh@31 -- # xtrace_disable 00:10:39.539 04:06:41 -- common/autotest_common.sh@10 -- # set +x 00:10:39.539 { 00:10:39.540 "subsystems": [ 00:10:39.540 { 00:10:39.540 "subsystem": "bdev", 00:10:39.540 "config": [ 00:10:39.540 { 00:10:39.540 "params": { 00:10:39.540 "block_size": 512, 00:10:39.540 "num_blocks": 2097152, 00:10:39.540 "name": "malloc0" 00:10:39.540 }, 00:10:39.540 "method": "bdev_malloc_create" 00:10:39.540 }, 00:10:39.540 { 00:10:39.540 "params": { 00:10:39.540 "io_mechanism": "io_uring", 00:10:39.540 "filename": "/dev/nullb0", 00:10:39.540 "name": "null0" 00:10:39.540 }, 00:10:39.540 "method": "bdev_xnvme_create" 00:10:39.540 }, 00:10:39.540 { 00:10:39.540 "method": "bdev_wait_for_examine" 00:10:39.540 } 00:10:39.540 ] 00:10:39.540 } 00:10:39.540 ] 00:10:39.540 } 00:10:39.540 [2024-11-26 04:06:41.178787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:39.540 [2024-11-26 04:06:41.178889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78345 ] 00:10:39.799 [2024-11-26 04:06:41.328743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.799 [2024-11-26 04:06:41.359953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.181  [2024-11-26T04:06:43.888Z] Copying: 245/1024 [MB] (245 MBps) [2024-11-26T04:06:44.850Z] Copying: 489/1024 [MB] (244 MBps) [2024-11-26T04:06:45.416Z] Copying: 800/1024 [MB] (310 MBps) [2024-11-26T04:06:45.674Z] Copying: 1024/1024 [MB] (average 277 MBps) 00:10:43.906 00:10:43.906 04:06:45 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:10:43.906 04:06:45 -- xnvme/xnvme.sh@47 -- # gen_conf 00:10:43.906 04:06:45 -- dd/common.sh@31 -- # xtrace_disable 00:10:43.906 04:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:43.906 { 00:10:43.906 "subsystems": [ 00:10:43.906 { 00:10:43.906 "subsystem": "bdev", 00:10:43.906 "config": [ 00:10:43.906 { 00:10:43.906 "params": { 00:10:43.906 "block_size": 512, 00:10:43.906 "num_blocks": 2097152, 00:10:43.906 "name": "malloc0" 00:10:43.906 }, 00:10:43.906 "method": "bdev_malloc_create" 00:10:43.906 }, 00:10:43.906 { 00:10:43.906 "params": { 00:10:43.906 "io_mechanism": "io_uring", 00:10:43.906 "filename": "/dev/nullb0", 00:10:43.906 "name": "null0" 00:10:43.906 }, 00:10:43.906 "method": "bdev_xnvme_create" 00:10:43.906 }, 00:10:43.906 { 00:10:43.906 "method": "bdev_wait_for_examine" 00:10:43.906 } 00:10:43.906 ] 00:10:43.906 } 00:10:43.906 ] 00:10:43.906 } 00:10:43.906 [2024-11-26 04:06:45.666851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
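Between the libaio and io_uring passes nothing changes except the io_mechanism parameter of bdev_xnvme_create; the device, bdev names, and spdk_dd commands are identical. Against a running target the same two backends could be created with the stock RPC helper — a sketch, with positional arguments (filename, bdev name, io mechanism) matching the bdev_xnvme_create calls traced later in this log:

    # xnvme bdev over the null device, one call per I/O mechanism
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 libaio
    # ...or, for the second pass (after removing the first bdev):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring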
00:10:43.906 [2024-11-26 04:06:45.666959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78399 ] 00:10:44.164 [2024-11-26 04:06:45.820117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.164 [2024-11-26 04:06:45.851710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.547  [2024-11-26T04:06:48.374Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-26T04:06:49.308Z] Copying: 494/1024 [MB] (247 MBps) [2024-11-26T04:06:49.875Z] Copying: 805/1024 [MB] (310 MBps) [2024-11-26T04:06:50.134Z] Copying: 1024/1024 [MB] (average 279 MBps) 00:10:48.367 00:10:48.367 04:06:50 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:10:48.367 04:06:50 -- dd/common.sh@195 -- # modprobe -r null_blk 00:10:48.367 00:10:48.367 real 0m18.378s 00:10:48.367 user 0m15.334s 00:10:48.367 sys 0m2.531s 00:10:48.367 04:06:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:48.367 ************************************ 00:10:48.367 END TEST xnvme_to_malloc_dd_copy 00:10:48.367 ************************************ 00:10:48.367 04:06:50 -- common/autotest_common.sh@10 -- # set +x 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:10:48.627 04:06:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:48.627 04:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.627 04:06:50 -- common/autotest_common.sh@10 -- # set +x 00:10:48.627 ************************************ 00:10:48.627 START TEST xnvme_bdevperf 00:10:48.627 ************************************ 00:10:48.627 04:06:50 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:10:48.627 04:06:50 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:10:48.627 04:06:50 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:10:48.627 04:06:50 -- dd/common.sh@191 -- # return 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@60 -- # local io 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@74 -- # gen_conf 00:10:48.627 04:06:50 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:10:48.627 04:06:50 -- dd/common.sh@31 -- # xtrace_disable 00:10:48.627 04:06:50 -- common/autotest_common.sh@10 -- # set +x 00:10:48.628 { 00:10:48.628 "subsystems": [ 00:10:48.628 { 00:10:48.628 "subsystem": "bdev", 00:10:48.628 "config": [ 00:10:48.628 { 00:10:48.628 "params": { 00:10:48.628 "io_mechanism": "libaio", 
00:10:48.628 "filename": "/dev/nullb0", 00:10:48.628 "name": "null0" 00:10:48.628 }, 00:10:48.628 "method": "bdev_xnvme_create" 00:10:48.628 }, 00:10:48.628 { 00:10:48.628 "method": "bdev_wait_for_examine" 00:10:48.628 } 00:10:48.628 ] 00:10:48.628 } 00:10:48.628 ] 00:10:48.628 } 00:10:48.628 [2024-11-26 04:06:50.217966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:48.628 [2024-11-26 04:06:50.218073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78479 ] 00:10:48.628 [2024-11-26 04:06:50.368562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.889 [2024-11-26 04:06:50.400466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.889 Running I/O for 5 seconds... 00:10:54.164 00:10:54.164 Latency(us) 00:10:54.164 [2024-11-26T04:06:55.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.164 [2024-11-26T04:06:55.932Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:10:54.164 null0 : 5.00 191645.74 748.62 0.00 0.00 331.60 115.00 1310.72 00:10:54.164 [2024-11-26T04:06:55.932Z] =================================================================================================================== 00:10:54.164 [2024-11-26T04:06:55.932Z] Total : 191645.74 748.62 0.00 0.00 331.60 115.00 1310.72 00:10:54.164 04:06:55 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:10:54.164 04:06:55 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:10:54.164 04:06:55 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:10:54.164 04:06:55 -- xnvme/xnvme.sh@74 -- # gen_conf 00:10:54.164 04:06:55 -- dd/common.sh@31 -- # xtrace_disable 00:10:54.164 04:06:55 -- common/autotest_common.sh@10 -- # set +x 00:10:54.164 { 00:10:54.164 "subsystems": [ 00:10:54.164 { 00:10:54.164 "subsystem": "bdev", 00:10:54.164 "config": [ 00:10:54.164 { 00:10:54.164 "params": { 00:10:54.164 "io_mechanism": "io_uring", 00:10:54.164 "filename": "/dev/nullb0", 00:10:54.164 "name": "null0" 00:10:54.164 }, 00:10:54.164 "method": "bdev_xnvme_create" 00:10:54.164 }, 00:10:54.164 { 00:10:54.164 "method": "bdev_wait_for_examine" 00:10:54.164 } 00:10:54.164 ] 00:10:54.164 } 00:10:54.164 ] 00:10:54.164 } 00:10:54.164 [2024-11-26 04:06:55.685045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:54.164 [2024-11-26 04:06:55.685164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78549 ] 00:10:54.164 [2024-11-26 04:06:55.830937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.164 [2024-11-26 04:06:55.858889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.164 Running I/O for 5 seconds... 
00:10:59.430 00:10:59.430 Latency(us) 00:10:59.430 [2024-11-26T04:07:01.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.430 [2024-11-26T04:07:01.198Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:10:59.430 null0 : 5.00 244334.40 954.43 0.00 0.00 259.98 152.81 775.09 00:10:59.430 [2024-11-26T04:07:01.198Z] =================================================================================================================== 00:10:59.430 [2024-11-26T04:07:01.198Z] Total : 244334.40 954.43 0.00 0.00 259.98 152.81 775.09 00:10:59.430 04:07:01 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:10:59.430 04:07:01 -- dd/common.sh@195 -- # modprobe -r null_blk 00:10:59.430 00:10:59.430 real 0m10.948s 00:10:59.430 user 0m8.559s 00:10:59.430 sys 0m2.158s 00:10:59.430 ************************************ 00:10:59.430 04:07:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:59.430 04:07:01 -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 END TEST xnvme_bdevperf 00:10:59.430 ************************************ 00:10:59.430 00:10:59.430 real 0m29.575s 00:10:59.430 user 0m24.008s 00:10:59.430 sys 0m4.800s 00:10:59.430 04:07:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:59.430 ************************************ 00:10:59.430 END TEST nvme_xnvme 00:10:59.430 ************************************ 00:10:59.430 04:07:01 -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 04:07:01 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:10:59.430 04:07:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:59.430 04:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.430 04:07:01 -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 ************************************ 00:10:59.430 START TEST blockdev_xnvme 00:10:59.430 ************************************ 00:10:59.430 04:07:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:10:59.689 * Looking for test storage... 00:10:59.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:59.689 04:07:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:59.689 04:07:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:59.689 04:07:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:59.689 04:07:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:59.689 04:07:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:59.689 04:07:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:59.689 04:07:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:59.689 04:07:01 -- scripts/common.sh@335 -- # IFS=.-: 00:10:59.689 04:07:01 -- scripts/common.sh@335 -- # read -ra ver1 00:10:59.689 04:07:01 -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.689 04:07:01 -- scripts/common.sh@336 -- # read -ra ver2 00:10:59.689 04:07:01 -- scripts/common.sh@337 -- # local 'op=<' 00:10:59.689 04:07:01 -- scripts/common.sh@339 -- # ver1_l=2 00:10:59.689 04:07:01 -- scripts/common.sh@340 -- # ver2_l=1 00:10:59.689 04:07:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:59.689 04:07:01 -- scripts/common.sh@343 -- # case "$op" in 00:10:59.689 04:07:01 -- scripts/common.sh@344 -- # : 1 00:10:59.689 04:07:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:59.689 04:07:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.689 04:07:01 -- scripts/common.sh@364 -- # decimal 1 00:10:59.689 04:07:01 -- scripts/common.sh@352 -- # local d=1 00:10:59.689 04:07:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.689 04:07:01 -- scripts/common.sh@354 -- # echo 1 00:10:59.689 04:07:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:59.690 04:07:01 -- scripts/common.sh@365 -- # decimal 2 00:10:59.690 04:07:01 -- scripts/common.sh@352 -- # local d=2 00:10:59.690 04:07:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.690 04:07:01 -- scripts/common.sh@354 -- # echo 2 00:10:59.690 04:07:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:59.690 04:07:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:59.690 04:07:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:59.690 04:07:01 -- scripts/common.sh@367 -- # return 0 00:10:59.690 04:07:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.690 04:07:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:59.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.690 --rc genhtml_branch_coverage=1 00:10:59.690 --rc genhtml_function_coverage=1 00:10:59.690 --rc genhtml_legend=1 00:10:59.690 --rc geninfo_all_blocks=1 00:10:59.690 --rc geninfo_unexecuted_blocks=1 00:10:59.690 00:10:59.690 ' 00:10:59.690 04:07:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:59.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.690 --rc genhtml_branch_coverage=1 00:10:59.690 --rc genhtml_function_coverage=1 00:10:59.690 --rc genhtml_legend=1 00:10:59.690 --rc geninfo_all_blocks=1 00:10:59.690 --rc geninfo_unexecuted_blocks=1 00:10:59.690 00:10:59.690 ' 00:10:59.690 04:07:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:59.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.690 --rc genhtml_branch_coverage=1 00:10:59.690 --rc genhtml_function_coverage=1 00:10:59.690 --rc genhtml_legend=1 00:10:59.690 --rc geninfo_all_blocks=1 00:10:59.690 --rc geninfo_unexecuted_blocks=1 00:10:59.690 00:10:59.690 ' 00:10:59.690 04:07:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:59.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.690 --rc genhtml_branch_coverage=1 00:10:59.690 --rc genhtml_function_coverage=1 00:10:59.690 --rc genhtml_legend=1 00:10:59.690 --rc geninfo_all_blocks=1 00:10:59.690 --rc geninfo_unexecuted_blocks=1 00:10:59.690 00:10:59.690 ' 00:10:59.690 04:07:01 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:59.690 04:07:01 -- bdev/nbd_common.sh@6 -- # set -e 00:10:59.690 04:07:01 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:59.690 04:07:01 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:59.690 04:07:01 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:59.690 04:07:01 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:59.690 04:07:01 -- bdev/blockdev.sh@18 -- # : 00:10:59.690 04:07:01 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:59.690 04:07:01 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:59.690 04:07:01 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:59.690 04:07:01 -- bdev/blockdev.sh@672 -- # uname -s 00:10:59.690 04:07:01 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:10:59.690 04:07:01 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:10:59.690 04:07:01 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:10:59.690 04:07:01 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:59.690 04:07:01 -- bdev/blockdev.sh@682 -- # dek= 00:10:59.690 04:07:01 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:59.690 04:07:01 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:59.690 04:07:01 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:59.690 04:07:01 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:10:59.690 04:07:01 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:10:59.690 04:07:01 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:59.690 04:07:01 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=78679 00:10:59.690 04:07:01 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:59.690 04:07:01 -- bdev/blockdev.sh@47 -- # waitforlisten 78679 00:10:59.690 04:07:01 -- common/autotest_common.sh@829 -- # '[' -z 78679 ']' 00:10:59.690 04:07:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.690 04:07:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.690 04:07:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.690 04:07:01 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:59.690 04:07:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.690 04:07:01 -- common/autotest_common.sh@10 -- # set +x 00:10:59.690 [2024-11-26 04:07:01.369882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:59.690 [2024-11-26 04:07:01.369992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78679 ] 00:10:59.949 [2024-11-26 04:07:01.514182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.949 [2024-11-26 04:07:01.542441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:59.949 [2024-11-26 04:07:01.542626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.515 04:07:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.515 04:07:02 -- common/autotest_common.sh@862 -- # return 0 00:11:00.515 04:07:02 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:00.515 04:07:02 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:11:00.515 04:07:02 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:11:00.515 04:07:02 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:11:00.515 04:07:02 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:01.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:01.081 Waiting for block devices as requested 00:11:01.081 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.081 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.081 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.339 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.621 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:06.621 04:07:07 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:11:06.621 04:07:07 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:11:06.621 04:07:07 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:11:06.621 04:07:07 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:11:06.621 04:07:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:06.621 04:07:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:06.621 04:07:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:06.621 04:07:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:06.621 04:07:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:11:06.621 04:07:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:11:06.621 04:07:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:06.621 04:07:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:11:06.621 04:07:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:11:06.621 04:07:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:06.621 04:07:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:06.621 04:07:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:11:06.621 04:07:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:06.621 04:07:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:06.621 04:07:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:06.621 04:07:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:06.621 04:07:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:06.621 04:07:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:06.621 04:07:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:06.621 04:07:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:06.621 04:07:07 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:11:06.621 04:07:07 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:11:06.621 04:07:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.621 04:07:07 -- common/autotest_common.sh@10 -- # set +x 00:11:06.621 04:07:07 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:06.621 nvme0n1 00:11:06.621 nvme1n1 00:11:06.621 nvme1n2 00:11:06.621 nvme1n3 00:11:06.621 nvme2n1 00:11:06.621 nvme3n1 00:11:06.622 04:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.622 04:07:08 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:06.622 04:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.622 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:06.622 04:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.622 04:07:08 -- bdev/blockdev.sh@738 -- # cat 00:11:06.622 04:07:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:06.622 04:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.622 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:06.622 04:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.622 04:07:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:06.622 04:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.622 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:06.622 04:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.622 04:07:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:06.622 04:07:08 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.622 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:06.622 04:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.622 04:07:08 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:06.622 04:07:08 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:06.622 04:07:08 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:06.622 04:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.622 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:06.622 04:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.622 04:07:08 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:06.622 04:07:08 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:06.622 04:07:08 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6e1d3a69-270f-4d07-9ce6-21a11e956be6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6e1d3a69-270f-4d07-9ce6-21a11e956be6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0ab80e5f-d412-44ec-aaa2-a891fda1fcc9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0ab80e5f-d412-44ec-aaa2-a891fda1fcc9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "851f987c-35c3-4800-bd54-50fc70b332b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "851f987c-35c3-4800-bd54-50fc70b332b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "538df82b-3013-43e3-a33b-2afcc8cb51ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "538df82b-3013-43e3-a33b-2afcc8cb51ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d741db06-c29a-4103-b964-b40d0a9d8e7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d741db06-c29a-4103-b964-b40d0a9d8e7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "df36bd35-a8a0-4d00-b646-851f1824e0f3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "df36bd35-a8a0-4d00-b646-851f1824e0f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:06.622 04:07:08 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:06.622 04:07:08 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:11:06.622 04:07:08 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:06.622 04:07:08 -- bdev/blockdev.sh@752 -- # killprocess 78679 00:11:06.622 04:07:08 -- common/autotest_common.sh@936 -- # '[' -z 78679 ']' 00:11:06.622 04:07:08 -- common/autotest_common.sh@940 -- # kill -0 78679 00:11:06.622 04:07:08 -- common/autotest_common.sh@941 -- # uname 00:11:06.622 04:07:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:06.622 04:07:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78679 00:11:06.622 killing process with pid 78679 00:11:06.622 04:07:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:06.622 04:07:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:06.622 04:07:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78679' 00:11:06.622 04:07:08 -- common/autotest_common.sh@955 -- # kill 78679 00:11:06.622 04:07:08 -- common/autotest_common.sh@960 -- # wait 78679 00:11:06.622 04:07:08 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:06.622 04:07:08 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:06.622 04:07:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:06.622 04:07:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:06.622 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:06.881 ************************************ 00:11:06.881 START TEST bdev_hello_world 00:11:06.881 ************************************ 00:11:06.881 04:07:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:06.881 [2024-11-26 04:07:08.437914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:06.881 [2024-11-26 04:07:08.438006] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79048 ] 00:11:06.881 [2024-11-26 04:07:08.577174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.881 [2024-11-26 04:07:08.606620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.139 [2024-11-26 04:07:08.762757] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:07.139 [2024-11-26 04:07:08.762802] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:11:07.139 [2024-11-26 04:07:08.762814] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:07.139 [2024-11-26 04:07:08.764275] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:07.139 [2024-11-26 04:07:08.764583] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:07.139 [2024-11-26 04:07:08.764600] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:07.139 [2024-11-26 04:07:08.764908] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:07.139 00:11:07.139 [2024-11-26 04:07:08.764948] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:07.139 ************************************ 00:11:07.139 END TEST bdev_hello_world 00:11:07.139 ************************************ 00:11:07.139 00:11:07.139 real 0m0.506s 00:11:07.139 user 0m0.266s 00:11:07.139 sys 0m0.134s 00:11:07.139 04:07:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:07.139 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:07.399 04:07:08 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:07.399 04:07:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:07.399 04:07:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.399 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:07.399 ************************************ 00:11:07.399 START TEST bdev_bounds 00:11:07.399 ************************************ 00:11:07.399 04:07:08 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:07.399 Process bdevio pid: 79074 00:11:07.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.399 04:07:08 -- bdev/blockdev.sh@288 -- # bdevio_pid=79074 00:11:07.399 04:07:08 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:07.399 04:07:08 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 79074' 00:11:07.399 04:07:08 -- bdev/blockdev.sh@291 -- # waitforlisten 79074 00:11:07.399 04:07:08 -- common/autotest_common.sh@829 -- # '[' -z 79074 ']' 00:11:07.399 04:07:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.399 04:07:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.399 04:07:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
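The hello_world pass above is the stock hello_bdev example pointed at the first xnvme bdev: it opens nvme0n1, writes one "Hello World!" block, reads it back, and compares. It can be rerun by hand against the bdev.json generated for this job:

    # open nvme0n1 via the bdev layer, write "Hello World!", read it back
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1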
00:11:07.399 04:07:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.399 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:11:07.399 04:07:08 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:07.399 [2024-11-26 04:07:09.006728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:07.399 [2024-11-26 04:07:09.006848] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79074 ] 00:11:07.399 [2024-11-26 04:07:09.154787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.664 [2024-11-26 04:07:09.187394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.664 [2024-11-26 04:07:09.187674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.664 [2024-11-26 04:07:09.187708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.236 04:07:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.236 04:07:09 -- common/autotest_common.sh@862 -- # return 0 00:11:08.236 04:07:09 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:08.236 I/O targets: 00:11:08.236 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:08.236 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:08.236 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:08.236 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:08.236 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:08.236 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:08.236 00:11:08.236 00:11:08.236 CUnit - A unit testing framework for C - Version 2.1-3 00:11:08.236 http://cunit.sourceforge.net/ 00:11:08.236 00:11:08.236 00:11:08.236 Suite: bdevio tests on: nvme3n1 00:11:08.236 Test: blockdev write read block ...passed 00:11:08.236 Test: blockdev write zeroes read block ...passed 00:11:08.236 Test: blockdev write zeroes read no split ...passed 00:11:08.236 Test: blockdev write zeroes read split ...passed 00:11:08.236 Test: blockdev write zeroes read split partial ...passed 00:11:08.236 Test: blockdev reset ...passed 00:11:08.236 Test: blockdev write read 8 blocks ...passed 00:11:08.236 Test: blockdev write read size > 128k ...passed 00:11:08.236 Test: blockdev write read invalid size ...passed 00:11:08.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:08.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:08.236 Test: blockdev write read max offset ...passed 00:11:08.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:08.236 Test: blockdev writev readv 8 blocks ...passed 00:11:08.236 Test: blockdev writev readv 30 x 1block ...passed 00:11:08.236 Test: blockdev writev readv block ...passed 00:11:08.236 Test: blockdev writev readv size > 128k ...passed 00:11:08.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:08.236 Test: blockdev comparev and writev ...passed 00:11:08.236 Test: blockdev nvme passthru rw ...passed 00:11:08.236 Test: blockdev nvme passthru vendor specific ...passed 00:11:08.236 Test: blockdev nvme admin passthru ...passed 00:11:08.236 Test: blockdev copy ...passed 00:11:08.236 Suite: bdevio tests on: nvme2n1 00:11:08.236 Test: blockdev write read 
block ...passed 00:11:08.236 Test: blockdev write zeroes read block ...passed 00:11:08.236 Test: blockdev write zeroes read no split ...passed 00:11:08.236 Test: blockdev write zeroes read split ...passed 00:11:08.236 Test: blockdev write zeroes read split partial ...passed 00:11:08.236 Test: blockdev reset ...passed 00:11:08.236 Test: blockdev write read 8 blocks ...passed 00:11:08.236 Test: blockdev write read size > 128k ...passed 00:11:08.236 Test: blockdev write read invalid size ...passed 00:11:08.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:08.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:08.236 Test: blockdev write read max offset ...passed 00:11:08.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:08.236 Test: blockdev writev readv 8 blocks ...passed 00:11:08.236 Test: blockdev writev readv 30 x 1block ...passed 00:11:08.236 Test: blockdev writev readv block ...passed 00:11:08.236 Test: blockdev writev readv size > 128k ...passed 00:11:08.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:08.236 Test: blockdev comparev and writev ...passed 00:11:08.236 Test: blockdev nvme passthru rw ...passed 00:11:08.236 Test: blockdev nvme passthru vendor specific ...passed 00:11:08.236 Test: blockdev nvme admin passthru ...passed 00:11:08.236 Test: blockdev copy ...passed 00:11:08.236 Suite: bdevio tests on: nvme1n3 00:11:08.236 Test: blockdev write read block ...passed 00:11:08.237 Test: blockdev write zeroes read block ...passed 00:11:08.237 Test: blockdev write zeroes read no split ...passed 00:11:08.237 Test: blockdev write zeroes read split ...passed 00:11:08.237 Test: blockdev write zeroes read split partial ...passed 00:11:08.237 Test: blockdev reset ...passed 00:11:08.237 Test: blockdev write read 8 blocks ...passed 00:11:08.237 Test: blockdev write read size > 128k ...passed 00:11:08.237 Test: blockdev write read invalid size ...passed 00:11:08.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:08.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:08.237 Test: blockdev write read max offset ...passed 00:11:08.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:08.237 Test: blockdev writev readv 8 blocks ...passed 00:11:08.237 Test: blockdev writev readv 30 x 1block ...passed 00:11:08.237 Test: blockdev writev readv block ...passed 00:11:08.237 Test: blockdev writev readv size > 128k ...passed 00:11:08.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:08.237 Test: blockdev comparev and writev ...passed 00:11:08.237 Test: blockdev nvme passthru rw ...passed 00:11:08.237 Test: blockdev nvme passthru vendor specific ...passed 00:11:08.237 Test: blockdev nvme admin passthru ...passed 00:11:08.237 Test: blockdev copy ...passed 00:11:08.237 Suite: bdevio tests on: nvme1n2 00:11:08.237 Test: blockdev write read block ...passed 00:11:08.237 Test: blockdev write zeroes read block ...passed 00:11:08.237 Test: blockdev write zeroes read no split ...passed 00:11:08.237 Test: blockdev write zeroes read split ...passed 00:11:08.237 Test: blockdev write zeroes read split partial ...passed 00:11:08.237 Test: blockdev reset ...passed 00:11:08.237 Test: blockdev write read 8 blocks ...passed 00:11:08.237 Test: blockdev write read size > 128k ...passed 00:11:08.237 Test: blockdev write read invalid size ...passed 00:11:08.237 Test: blockdev write read offset + nbytes 
== size of blockdev ...passed 00:11:08.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:08.237 Test: blockdev write read max offset ...passed 00:11:08.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:08.237 Test: blockdev writev readv 8 blocks ...passed 00:11:08.237 Test: blockdev writev readv 30 x 1block ...passed 00:11:08.237 Test: blockdev writev readv block ...passed 00:11:08.237 Test: blockdev writev readv size > 128k ...passed 00:11:08.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:08.237 Test: blockdev comparev and writev ...passed 00:11:08.237 Test: blockdev nvme passthru rw ...passed 00:11:08.237 Test: blockdev nvme passthru vendor specific ...passed 00:11:08.237 Test: blockdev nvme admin passthru ...passed 00:11:08.237 Test: blockdev copy ...passed 00:11:08.237 Suite: bdevio tests on: nvme1n1 00:11:08.237 Test: blockdev write read block ...passed 00:11:08.237 Test: blockdev write zeroes read block ...passed 00:11:08.237 Test: blockdev write zeroes read no split ...passed 00:11:08.237 Test: blockdev write zeroes read split ...passed 00:11:08.237 Test: blockdev write zeroes read split partial ...passed 00:11:08.237 Test: blockdev reset ...passed 00:11:08.498 Test: blockdev write read 8 blocks ...passed 00:11:08.498 Test: blockdev write read size > 128k ...passed 00:11:08.498 Test: blockdev write read invalid size ...passed 00:11:08.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:08.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:08.498 Test: blockdev write read max offset ...passed 00:11:08.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:08.498 Test: blockdev writev readv 8 blocks ...passed 00:11:08.498 Test: blockdev writev readv 30 x 1block ...passed 00:11:08.498 Test: blockdev writev readv block ...passed 00:11:08.498 Test: blockdev writev readv size > 128k ...passed 00:11:08.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:08.498 Test: blockdev comparev and writev ...passed 00:11:08.498 Test: blockdev nvme passthru rw ...passed 00:11:08.498 Test: blockdev nvme passthru vendor specific ...passed 00:11:08.498 Test: blockdev nvme admin passthru ...passed 00:11:08.498 Test: blockdev copy ...passed 00:11:08.498 Suite: bdevio tests on: nvme0n1 00:11:08.498 Test: blockdev write read block ...passed 00:11:08.498 Test: blockdev write zeroes read block ...passed 00:11:08.498 Test: blockdev write zeroes read no split ...passed 00:11:08.498 Test: blockdev write zeroes read split ...passed 00:11:08.498 Test: blockdev write zeroes read split partial ...passed 00:11:08.498 Test: blockdev reset ...passed 00:11:08.498 Test: blockdev write read 8 blocks ...passed 00:11:08.498 Test: blockdev write read size > 128k ...passed 00:11:08.498 Test: blockdev write read invalid size ...passed 00:11:08.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:08.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:08.498 Test: blockdev write read max offset ...passed 00:11:08.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:08.498 Test: blockdev writev readv 8 blocks ...passed 00:11:08.498 Test: blockdev writev readv 30 x 1block ...passed 00:11:08.498 Test: blockdev writev readv block ...passed 00:11:08.498 Test: blockdev writev readv size > 128k ...passed 00:11:08.498 Test: blockdev writev readv size > 
128k in two iovs ...passed 00:11:08.498 Test: blockdev comparev and writev ...passed 00:11:08.498 Test: blockdev nvme passthru rw ...passed 00:11:08.498 Test: blockdev nvme passthru vendor specific ...passed 00:11:08.498 Test: blockdev nvme admin passthru ...passed 00:11:08.498 Test: blockdev copy ...passed 00:11:08.498 00:11:08.498 Run Summary: Type Total Ran Passed Failed Inactive 00:11:08.498 suites 6 6 n/a 0 0 00:11:08.498 tests 138 138 138 0 0 00:11:08.498 asserts 780 780 780 0 n/a 00:11:08.498 00:11:08.498 Elapsed time = 0.237 seconds 00:11:08.498 0 00:11:08.499 04:07:10 -- bdev/blockdev.sh@293 -- # killprocess 79074 00:11:08.499 04:07:10 -- common/autotest_common.sh@936 -- # '[' -z 79074 ']' 00:11:08.499 04:07:10 -- common/autotest_common.sh@940 -- # kill -0 79074 00:11:08.499 04:07:10 -- common/autotest_common.sh@941 -- # uname 00:11:08.499 04:07:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.499 04:07:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79074 00:11:08.499 04:07:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:08.499 04:07:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:08.499 04:07:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79074' 00:11:08.499 killing process with pid 79074 00:11:08.499 04:07:10 -- common/autotest_common.sh@955 -- # kill 79074 00:11:08.499 04:07:10 -- common/autotest_common.sh@960 -- # wait 79074 00:11:08.499 04:07:10 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:08.499 00:11:08.499 real 0m1.261s 00:11:08.499 user 0m3.179s 00:11:08.499 sys 0m0.250s 00:11:08.499 04:07:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:08.499 04:07:10 -- common/autotest_common.sh@10 -- # set +x 00:11:08.499 ************************************ 00:11:08.499 END TEST bdev_bounds 00:11:08.499 ************************************ 00:11:08.499 04:07:10 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:08.499 04:07:10 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:08.499 04:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.499 04:07:10 -- common/autotest_common.sh@10 -- # set +x 00:11:08.760 ************************************ 00:11:08.760 START TEST bdev_nbd 00:11:08.760 ************************************ 00:11:08.760 04:07:10 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:08.760 04:07:10 -- bdev/blockdev.sh@298 -- # uname -s 00:11:08.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
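The bounds suite that just finished is driven by the bdevio harness: started with -w so it idles until tests are triggered over RPC, then kicked by tests.py perform_tests, as traced above. Condensed under the same paths (backgrounding the harness here is a sketch of what the test script orchestrates):

    # run the 138 bdevio cases against all six xnvme bdevs
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests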
00:11:08.760 04:07:10 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:08.760 04:07:10 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.760 04:07:10 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:08.760 04:07:10 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:08.760 04:07:10 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:08.760 04:07:10 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:11:08.760 04:07:10 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:08.760 04:07:10 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:08.760 04:07:10 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:08.760 04:07:10 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:11:08.760 04:07:10 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:08.760 04:07:10 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:08.760 04:07:10 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:08.760 04:07:10 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:08.760 04:07:10 -- bdev/blockdev.sh@316 -- # nbd_pid=79117 00:11:08.760 04:07:10 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:08.760 04:07:10 -- bdev/blockdev.sh@318 -- # waitforlisten 79117 /var/tmp/spdk-nbd.sock 00:11:08.760 04:07:10 -- common/autotest_common.sh@829 -- # '[' -z 79117 ']' 00:11:08.760 04:07:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:08.760 04:07:10 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:08.760 04:07:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.760 04:07:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:08.760 04:07:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.760 04:07:10 -- common/autotest_common.sh@10 -- # set +x 00:11:08.760 [2024-11-26 04:07:10.331161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:08.760 [2024-11-26 04:07:10.331275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.760 [2024-11-26 04:07:10.481770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.760 [2024-11-26 04:07:10.514578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.700 04:07:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.700 04:07:11 -- common/autotest_common.sh@862 -- # return 0 00:11:09.700 04:07:11 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@24 -- # local i 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:09.700 04:07:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:09.700 04:07:11 -- common/autotest_common.sh@867 -- # local i 00:11:09.700 04:07:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:09.700 04:07:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:09.700 04:07:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:09.700 04:07:11 -- common/autotest_common.sh@871 -- # break 00:11:09.700 04:07:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:09.700 04:07:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:09.700 04:07:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.700 1+0 records in 00:11:09.700 1+0 records out 00:11:09.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522153 s, 7.8 MB/s 00:11:09.700 04:07:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.700 04:07:11 -- common/autotest_common.sh@884 -- # size=4096 00:11:09.700 04:07:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.700 04:07:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:09.700 04:07:11 -- common/autotest_common.sh@887 -- # return 0 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.700 04:07:11 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:09.700 04:07:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:11:09.959 04:07:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:09.959 04:07:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:09.959 04:07:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:09.959 04:07:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:09.959 04:07:11 -- common/autotest_common.sh@867 -- # local i 00:11:09.959 04:07:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:09.959 04:07:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:09.959 04:07:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:09.959 04:07:11 -- common/autotest_common.sh@871 -- # break 00:11:09.959 04:07:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:09.959 04:07:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:09.959 04:07:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.959 1+0 records in 00:11:09.959 1+0 records out 00:11:09.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067449 s, 6.1 MB/s 00:11:09.959 04:07:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.959 04:07:11 -- common/autotest_common.sh@884 -- # size=4096 00:11:09.959 04:07:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.959 04:07:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:09.959 04:07:11 -- common/autotest_common.sh@887 -- # return 0 00:11:09.959 04:07:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.959 04:07:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:09.959 04:07:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:11:10.218 04:07:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:10.218 04:07:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:10.218 04:07:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:10.218 04:07:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:10.218 04:07:11 -- common/autotest_common.sh@867 -- # local i 00:11:10.218 04:07:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:10.218 04:07:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:10.218 04:07:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:10.218 04:07:11 -- common/autotest_common.sh@871 -- # break 00:11:10.218 04:07:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:10.218 04:07:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:10.218 04:07:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.218 1+0 records in 00:11:10.218 1+0 records out 00:11:10.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535249 s, 7.7 MB/s 00:11:10.218 04:07:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.218 04:07:11 -- common/autotest_common.sh@884 -- # size=4096 00:11:10.218 04:07:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.218 04:07:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:10.218 04:07:11 -- common/autotest_common.sh@887 -- # return 0 
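Each nbd_start_disk above is immediately followed by the same readiness check. Condensed into a sketch, the 20-try budget, the /proc/partitions grep, and the 4096-byte direct read all match the trace, while the sleep interval is an assumption:

# Sketch: wait for /dev/$1 to register, then prove it can service real I/O.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One 4 KiB O_DIRECT read distinguishes a usable device from a bare node.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]] || return 1
    rm -f /tmp/nbdtest
}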
00:11:10.218 04:07:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.218 04:07:11 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:10.218 04:07:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:11:10.477 04:07:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:10.477 04:07:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:10.477 04:07:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:10.477 04:07:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:10.477 04:07:12 -- common/autotest_common.sh@867 -- # local i 00:11:10.477 04:07:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:10.477 04:07:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:10.477 04:07:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:10.477 04:07:12 -- common/autotest_common.sh@871 -- # break 00:11:10.477 04:07:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:10.477 04:07:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:10.477 04:07:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.477 1+0 records in 00:11:10.477 1+0 records out 00:11:10.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824173 s, 5.0 MB/s 00:11:10.477 04:07:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.477 04:07:12 -- common/autotest_common.sh@884 -- # size=4096 00:11:10.477 04:07:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.477 04:07:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:10.477 04:07:12 -- common/autotest_common.sh@887 -- # return 0 00:11:10.477 04:07:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.477 04:07:12 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:10.477 04:07:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:11:10.737 04:07:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:10.737 04:07:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:10.737 04:07:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:10.737 04:07:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:10.737 04:07:12 -- common/autotest_common.sh@867 -- # local i 00:11:10.737 04:07:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:10.737 04:07:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:10.737 04:07:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:10.737 04:07:12 -- common/autotest_common.sh@871 -- # break 00:11:10.737 04:07:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:10.737 04:07:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:10.737 04:07:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.737 1+0 records in 00:11:10.737 1+0 records out 00:11:10.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106259 s, 3.9 MB/s 00:11:10.737 04:07:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.737 04:07:12 -- common/autotest_common.sh@884 -- # size=4096 00:11:10.737 04:07:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.737 04:07:12 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:11:10.737 04:07:12 -- common/autotest_common.sh@887 -- # return 0 00:11:10.737 04:07:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.737 04:07:12 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:10.737 04:07:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:10.996 04:07:12 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:10.996 04:07:12 -- common/autotest_common.sh@867 -- # local i 00:11:10.996 04:07:12 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:10.996 04:07:12 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:10.996 04:07:12 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:10.996 04:07:12 -- common/autotest_common.sh@871 -- # break 00:11:10.996 04:07:12 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:10.996 04:07:12 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:10.996 04:07:12 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.996 1+0 records in 00:11:10.996 1+0 records out 00:11:10.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470348 s, 8.7 MB/s 00:11:10.996 04:07:12 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.996 04:07:12 -- common/autotest_common.sh@884 -- # size=4096 00:11:10.996 04:07:12 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.996 04:07:12 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:10.996 04:07:12 -- common/autotest_common.sh@887 -- # return 0 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd0", 00:11:10.996 "bdev_name": "nvme0n1" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd1", 00:11:10.996 "bdev_name": "nvme1n1" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd2", 00:11:10.996 "bdev_name": "nvme1n2" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd3", 00:11:10.996 "bdev_name": "nvme1n3" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd4", 00:11:10.996 "bdev_name": "nvme2n1" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd5", 00:11:10.996 "bdev_name": "nvme3n1" 00:11:10.996 } 00:11:10.996 ]' 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:10.996 04:07:12 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd0", 00:11:10.996 "bdev_name": "nvme0n1" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd1", 00:11:10.996 "bdev_name": "nvme1n1" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd2", 00:11:10.996 "bdev_name": "nvme1n2" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd3", 00:11:10.996 
"bdev_name": "nvme1n3" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd4", 00:11:10.996 "bdev_name": "nvme2n1" 00:11:10.996 }, 00:11:10.996 { 00:11:10.996 "nbd_device": "/dev/nbd5", 00:11:10.996 "bdev_name": "nvme3n1" 00:11:10.996 } 00:11:10.996 ]' 00:11:11.253 04:07:12 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:11.253 04:07:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.253 04:07:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@51 -- # local i 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@41 -- # break 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.254 04:07:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@41 -- # break 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.512 04:07:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@41 -- # break 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.773 04:07:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:12.036 
04:07:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@41 -- # break 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.036 04:07:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@41 -- # break 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.297 04:07:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:12.555 04:07:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@41 -- # break 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:12.556 04:07:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@65 -- # true 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@65 -- # count=0 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@122 -- # count=0 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@127 -- # return 0 00:11:12.816 04:07:14 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@12 -- # local i 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:11:12.816 /dev/nbd0 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:12.816 04:07:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:12.816 04:07:14 -- common/autotest_common.sh@867 -- # local i 00:11:12.816 04:07:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:12.816 04:07:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:12.816 04:07:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:12.816 04:07:14 -- common/autotest_common.sh@871 -- # break 00:11:12.816 04:07:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:12.816 04:07:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:12.816 04:07:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.816 1+0 records in 00:11:12.816 1+0 records out 00:11:12.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506641 s, 8.1 MB/s 00:11:12.816 04:07:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.816 04:07:14 -- common/autotest_common.sh@884 -- # size=4096 00:11:12.816 04:07:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.816 04:07:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:12.816 04:07:14 -- common/autotest_common.sh@887 -- # return 0 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:12.816 04:07:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:11:13.076 /dev/nbd1 00:11:13.076 04:07:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:13.076 04:07:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:13.076 04:07:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:13.076 04:07:14 -- common/autotest_common.sh@867 -- # local i 00:11:13.076 04:07:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:13.076 04:07:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:13.076 04:07:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:13.076 04:07:14 -- common/autotest_common.sh@871 -- # break 
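The pass running here maps all six bdevs onto fixed NBD nodes; a condensed sketch, with waitfornbd as above and SPDK_DIR illustrative:

# Sketch: attach every bdev to its NBD node over the private RPC socket.
bdev_list=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdev_list[@]}"; do
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
done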
00:11:13.076 04:07:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:13.076 04:07:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:13.076 04:07:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.076 1+0 records in 00:11:13.076 1+0 records out 00:11:13.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716486 s, 5.7 MB/s 00:11:13.076 04:07:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.076 04:07:14 -- common/autotest_common.sh@884 -- # size=4096 00:11:13.076 04:07:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.076 04:07:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:13.076 04:07:14 -- common/autotest_common.sh@887 -- # return 0 00:11:13.076 04:07:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.076 04:07:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:13.076 04:07:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:11:13.343 /dev/nbd10 00:11:13.343 04:07:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:13.343 04:07:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:13.343 04:07:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:13.343 04:07:14 -- common/autotest_common.sh@867 -- # local i 00:11:13.343 04:07:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:13.343 04:07:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:13.343 04:07:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:13.343 04:07:14 -- common/autotest_common.sh@871 -- # break 00:11:13.343 04:07:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:13.343 04:07:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:13.343 04:07:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.343 1+0 records in 00:11:13.343 1+0 records out 00:11:13.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683181 s, 6.0 MB/s 00:11:13.343 04:07:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.343 04:07:14 -- common/autotest_common.sh@884 -- # size=4096 00:11:13.343 04:07:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.343 04:07:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:13.343 04:07:14 -- common/autotest_common.sh@887 -- # return 0 00:11:13.343 04:07:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.343 04:07:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:13.343 04:07:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:11:13.629 /dev/nbd11 00:11:13.629 04:07:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:13.629 04:07:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:13.629 04:07:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:13.629 04:07:15 -- common/autotest_common.sh@867 -- # local i 00:11:13.629 04:07:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:13.629 04:07:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:13.629 04:07:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:13.629 04:07:15 -- 
common/autotest_common.sh@871 -- # break 00:11:13.629 04:07:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:13.629 04:07:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:13.629 04:07:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.629 1+0 records in 00:11:13.629 1+0 records out 00:11:13.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965126 s, 4.2 MB/s 00:11:13.629 04:07:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.629 04:07:15 -- common/autotest_common.sh@884 -- # size=4096 00:11:13.629 04:07:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.629 04:07:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:13.629 04:07:15 -- common/autotest_common.sh@887 -- # return 0 00:11:13.629 04:07:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.629 04:07:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:13.629 04:07:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:11:13.629 /dev/nbd12 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:13.892 04:07:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:13.892 04:07:15 -- common/autotest_common.sh@867 -- # local i 00:11:13.892 04:07:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:13.892 04:07:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:13.892 04:07:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:13.892 04:07:15 -- common/autotest_common.sh@871 -- # break 00:11:13.892 04:07:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:13.892 04:07:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:13.892 04:07:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.892 1+0 records in 00:11:13.892 1+0 records out 00:11:13.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565927 s, 7.2 MB/s 00:11:13.892 04:07:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.892 04:07:15 -- common/autotest_common.sh@884 -- # size=4096 00:11:13.892 04:07:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.892 04:07:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:13.892 04:07:15 -- common/autotest_common.sh@887 -- # return 0 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:11:13.892 /dev/nbd13 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:13.892 04:07:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:13.892 04:07:15 -- common/autotest_common.sh@867 -- # local i 00:11:13.892 04:07:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:13.892 04:07:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:13.892 04:07:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
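Before any data goes through the devices, the harness double-checks what the target actually exports; the count check that follows reduces to the sketch below. The '|| true' keeps grep -c's exit status 1 on zero matches from aborting the shell under set -e:

# Sketch: count the NBD devices the target reports via nbd_get_disks.
count=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | .nbd_device' \
    | grep -c /dev/nbd || true)
[[ $count -eq 6 ]]   # all six mappings must be live before the dd write pass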
00:11:13.892 04:07:15 -- common/autotest_common.sh@871 -- # break 00:11:13.892 04:07:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:13.892 04:07:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:13.892 04:07:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.892 1+0 records in 00:11:13.892 1+0 records out 00:11:13.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696516 s, 5.9 MB/s 00:11:13.892 04:07:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.892 04:07:15 -- common/autotest_common.sh@884 -- # size=4096 00:11:13.892 04:07:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.892 04:07:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:13.892 04:07:15 -- common/autotest_common.sh@887 -- # return 0 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.892 04:07:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:14.154 04:07:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:14.154 { 00:11:14.154 "nbd_device": "/dev/nbd0", 00:11:14.154 "bdev_name": "nvme0n1" 00:11:14.154 }, 00:11:14.154 { 00:11:14.154 "nbd_device": "/dev/nbd1", 00:11:14.154 "bdev_name": "nvme1n1" 00:11:14.154 }, 00:11:14.154 { 00:11:14.154 "nbd_device": "/dev/nbd10", 00:11:14.154 "bdev_name": "nvme1n2" 00:11:14.154 }, 00:11:14.154 { 00:11:14.154 "nbd_device": "/dev/nbd11", 00:11:14.154 "bdev_name": "nvme1n3" 00:11:14.154 }, 00:11:14.154 { 00:11:14.154 "nbd_device": "/dev/nbd12", 00:11:14.154 "bdev_name": "nvme2n1" 00:11:14.154 }, 00:11:14.154 { 00:11:14.154 "nbd_device": "/dev/nbd13", 00:11:14.154 "bdev_name": "nvme3n1" 00:11:14.154 } 00:11:14.154 ]' 00:11:14.154 04:07:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:14.155 04:07:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:14.155 { 00:11:14.155 "nbd_device": "/dev/nbd0", 00:11:14.155 "bdev_name": "nvme0n1" 00:11:14.155 }, 00:11:14.155 { 00:11:14.155 "nbd_device": "/dev/nbd1", 00:11:14.155 "bdev_name": "nvme1n1" 00:11:14.155 }, 00:11:14.155 { 00:11:14.155 "nbd_device": "/dev/nbd10", 00:11:14.155 "bdev_name": "nvme1n2" 00:11:14.155 }, 00:11:14.155 { 00:11:14.155 "nbd_device": "/dev/nbd11", 00:11:14.155 "bdev_name": "nvme1n3" 00:11:14.155 }, 00:11:14.155 { 00:11:14.155 "nbd_device": "/dev/nbd12", 00:11:14.155 "bdev_name": "nvme2n1" 00:11:14.155 }, 00:11:14.155 { 00:11:14.155 "nbd_device": "/dev/nbd13", 00:11:14.155 "bdev_name": "nvme3n1" 00:11:14.155 } 00:11:14.155 ]' 00:11:14.155 04:07:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:14.155 /dev/nbd1 00:11:14.155 /dev/nbd10 00:11:14.155 /dev/nbd11 00:11:14.155 /dev/nbd12 00:11:14.155 /dev/nbd13' 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:14.417 /dev/nbd1 00:11:14.417 /dev/nbd10 00:11:14.417 /dev/nbd11 00:11:14.417 /dev/nbd12 00:11:14.417 /dev/nbd13' 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@65 -- # count=6 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@66 -- # echo 6 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@95 -- # 
count=6 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:14.417 256+0 records in 00:11:14.417 256+0 records out 00:11:14.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00918531 s, 114 MB/s 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.417 04:07:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:14.417 256+0 records in 00:11:14.417 256+0 records out 00:11:14.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222802 s, 4.7 MB/s 00:11:14.417 04:07:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.417 04:07:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:14.680 256+0 records in 00:11:14.680 256+0 records out 00:11:14.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222906 s, 4.7 MB/s 00:11:14.680 04:07:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.680 04:07:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:14.941 256+0 records in 00:11:14.941 256+0 records out 00:11:14.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218112 s, 4.8 MB/s 00:11:14.941 04:07:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.941 04:07:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:15.203 256+0 records in 00:11:15.203 256+0 records out 00:11:15.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193344 s, 5.4 MB/s 00:11:15.203 04:07:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:15.203 04:07:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:15.463 256+0 records in 00:11:15.463 256+0 records out 00:11:15.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.284938 s, 3.7 MB/s 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:15.463 256+0 records in 00:11:15.463 256+0 records out 00:11:15.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119371 s, 8.8 MB/s 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:15.463 04:07:17 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.463 04:07:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@51 -- # local i 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@41 -- # break 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.724 04:07:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:15.984 04:07:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:15.984 04:07:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:15.985 04:07:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:15.985 04:07:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.985 04:07:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.985 04:07:17 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:15.985 04:07:17 -- bdev/nbd_common.sh@41 -- # break 00:11:15.985 04:07:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.985 04:07:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.985 04:07:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@41 -- # break 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.246 04:07:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@41 -- # break 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.505 04:07:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@41 -- # break 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.763 04:07:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.764 04:07:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@41 -- # break 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@65 -- # true 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@65 -- # count=0 00:11:17.022 04:07:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@104 -- # count=0 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@109 -- # return 0 00:11:17.281 04:07:18 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:17.281 malloc_lvol_verify 00:11:17.281 04:07:18 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:17.540 0c3ac0ef-968e-4518-b8d1-790c33f85c35 00:11:17.540 04:07:19 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:17.800 d4af57bc-72f5-4506-bcd3-3344f26fa08a 00:11:17.800 04:07:19 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:17.800 /dev/nbd0 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:18.058 mke2fs 1.47.0 (5-Feb-2023) 00:11:18.058 Discarding device blocks: 0/4096 done 00:11:18.058 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:18.058 00:11:18.058 Allocating group tables: 0/1 done 00:11:18.058 Writing inode tables: 0/1 done 00:11:18.058 Creating journal (1024 blocks): done 00:11:18.058 Writing superblocks and filesystem accounting information: 0/1 done 00:11:18.058 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@51 -- # local i 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@41 -- # break 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:18.058 04:07:19 -- bdev/nbd_common.sh@147 -- # return 0 00:11:18.058 04:07:19 -- bdev/blockdev.sh@324 -- # killprocess 79117 00:11:18.058 04:07:19 -- common/autotest_common.sh@936 -- # '[' -z 79117 ']' 00:11:18.058 04:07:19 -- common/autotest_common.sh@940 -- # kill -0 79117 00:11:18.058 04:07:19 -- common/autotest_common.sh@941 -- # uname 00:11:18.058 04:07:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:18.058 04:07:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79117 00:11:18.058 04:07:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:18.058 04:07:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:18.058 killing process with pid 79117 00:11:18.058 04:07:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79117' 00:11:18.058 04:07:19 -- common/autotest_common.sh@955 -- # kill 79117 00:11:18.058 04:07:19 -- common/autotest_common.sh@960 -- # wait 79117 00:11:18.319 04:07:19 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:18.319 00:11:18.319 real 0m9.684s 00:11:18.319 user 0m13.299s 00:11:18.319 sys 0m3.481s 00:11:18.319 04:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.319 04:07:19 -- common/autotest_common.sh@10 -- # set +x 00:11:18.319 ************************************ 00:11:18.319 END TEST bdev_nbd 00:11:18.319 ************************************ 00:11:18.319 04:07:20 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:18.319 04:07:20 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:11:18.319 04:07:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:18.319 04:07:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.319 04:07:20 -- common/autotest_common.sh@10 -- # set +x 00:11:18.319 ************************************ 00:11:18.319 START TEST bdev_fio 00:11:18.319 ************************************ 00:11:18.319 04:07:20 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@329 -- # local env_context 00:11:18.319 04:07:20 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:18.319 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:18.319 04:07:20 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:18.319 04:07:20 -- bdev/blockdev.sh@337 -- # echo '' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:11:18.319 04:07:20 -- bdev/blockdev.sh@337 -- # env_context= 00:11:18.319 04:07:20 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:18.319 04:07:20 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:18.319 04:07:20 -- common/autotest_common.sh@1270 -- # local workload=verify 00:11:18.319 04:07:20 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:11:18.319 
04:07:20 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:18.319 04:07:20 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:18.319 04:07:20 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:18.319 04:07:20 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:11:18.319 04:07:20 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:18.319 04:07:20 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:18.319 04:07:20 -- common/autotest_common.sh@1290 -- # cat 00:11:18.319 04:07:20 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:11:18.319 04:07:20 -- common/autotest_common.sh@1303 -- # cat 00:11:18.319 04:07:20 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:11:18.319 04:07:20 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:11:18.319 04:07:20 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:18.319 04:07:20 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:11:18.319 04:07:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:18.319 04:07:20 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:11:18.319 04:07:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:18.319 04:07:20 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:11:18.319 04:07:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:18.319 04:07:20 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:11:18.319 04:07:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:18.319 04:07:20 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:11:18.319 04:07:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:18.319 04:07:20 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:11:18.319 04:07:20 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:18.319 04:07:20 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:11:18.319 04:07:20 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:11:18.319 04:07:20 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:18.320 04:07:20 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:18.320 04:07:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:18.320 04:07:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.320 04:07:20 -- common/autotest_common.sh@10 -- # set +x 00:11:18.580 ************************************ 00:11:18.580 START TEST bdev_fio_rw_verify 00:11:18.580 ************************************ 00:11:18.581 04:07:20 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:18.581 04:07:20 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:18.581 04:07:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:18.581 04:07:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:18.581 04:07:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:18.581 04:07:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:18.581 04:07:20 -- common/autotest_common.sh@1330 -- # shift 00:11:18.581 04:07:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:18.581 04:07:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:18.581 04:07:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:18.581 04:07:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:18.581 04:07:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:18.581 04:07:20 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:18.581 04:07:20 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:18.581 04:07:20 -- common/autotest_common.sh@1336 -- # break 00:11:18.581 04:07:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:18.581 04:07:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:18.581 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:18.581 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:18.581 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:18.581 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:18.581 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:18.581 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:18.581 fio-3.35 00:11:18.581 Starting 6 threads 00:11:30.785 00:11:30.785 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=79510: Tue Nov 26 04:07:30 2024 00:11:30.785 read: IOPS=39.7k, BW=155MiB/s (163MB/s)(1553MiB/10003msec) 00:11:30.785 slat (usec): min=2, max=2550, avg= 4.45, stdev= 7.38 00:11:30.785 clat (usec): min=44, max=9690, avg=406.79, stdev=301.30 00:11:30.785 lat (usec): min=49, max=9697, avg=411.24, stdev=301.85 00:11:30.785 clat percentiles (usec): 00:11:30.785 | 50.000th=[ 351], 99.000th=[ 1663], 99.900th=[ 3425], 99.990th=[ 4621], 
00:11:30.785 | 99.999th=[ 9634] 00:11:30.785 write: IOPS=40.1k, BW=157MiB/s (164MB/s)(1569MiB/10003msec); 0 zone resets 00:11:30.785 slat (usec): min=9, max=3310, avg=22.90, stdev=45.65 00:11:30.785 clat (usec): min=59, max=8057, avg=584.85, stdev=414.26 00:11:30.785 lat (usec): min=80, max=8075, avg=607.75, stdev=419.34 00:11:30.785 clat percentiles (usec): 00:11:30.785 | 50.000th=[ 498], 99.000th=[ 2474], 99.900th=[ 3752], 99.990th=[ 4948], 00:11:30.785 | 99.999th=[ 7439] 00:11:30.785 bw ( KiB/s): min=86278, max=196096, per=99.73%, avg=160132.37, stdev=5593.04, samples=114 00:11:30.785 iops : min=21568, max=49026, avg=40032.58, stdev=1398.33, samples=114 00:11:30.785 lat (usec) : 50=0.01%, 100=0.11%, 250=17.51%, 500=46.29%, 750=24.28% 00:11:30.785 lat (usec) : 1000=6.34% 00:11:30.785 lat (msec) : 2=4.12%, 4=1.30%, 10=0.04% 00:11:30.785 cpu : usr=51.89%, sys=28.50%, ctx=10644, majf=0, minf=33943 00:11:30.786 IO depths : 1=11.3%, 2=23.4%, 4=51.1%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:30.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.786 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.786 issued rwts: total=397535,401545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.786 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:30.786 00:11:30.786 Run status group 0 (all jobs): 00:11:30.786 READ: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=1553MiB (1628MB), run=10003-10003msec 00:11:30.786 WRITE: bw=157MiB/s (164MB/s), 157MiB/s-157MiB/s (164MB/s-164MB/s), io=1569MiB (1645MB), run=10003-10003msec 00:11:30.786 ----------------------------------------------------- 00:11:30.786 Suppressions used: 00:11:30.786 count bytes template 00:11:30.786 6 48 /usr/src/fio/parse.c 00:11:30.786 3714 356544 /usr/src/fio/iolog.c 00:11:30.786 1 8 libtcmalloc_minimal.so 00:11:30.786 1 904 libcrypto.so 00:11:30.786 ----------------------------------------------------- 00:11:30.786 00:11:30.786 00:11:30.786 real 0m11.030s 00:11:30.786 user 0m31.869s 00:11:30.786 sys 0m17.398s 00:11:30.786 04:07:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.786 04:07:31 -- common/autotest_common.sh@10 -- # set +x 00:11:30.786 ************************************ 00:11:30.786 END TEST bdev_fio_rw_verify 00:11:30.786 ************************************ 00:11:30.786 04:07:31 -- bdev/blockdev.sh@348 -- # rm -f 00:11:30.786 04:07:31 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:30.786 04:07:31 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:11:30.786 04:07:31 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:30.786 04:07:31 -- common/autotest_common.sh@1270 -- # local workload=trim 00:11:30.786 04:07:31 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:11:30.786 04:07:31 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:30.786 04:07:31 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:30.786 04:07:31 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:30.786 04:07:31 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:11:30.786 04:07:31 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:30.786 04:07:31 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:30.786 04:07:31 -- common/autotest_common.sh@1290 
-- # cat 00:11:30.786 04:07:31 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:11:30.786 04:07:31 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:11:30.786 04:07:31 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:11:30.786 04:07:31 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:30.786 04:07:31 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6e1d3a69-270f-4d07-9ce6-21a11e956be6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6e1d3a69-270f-4d07-9ce6-21a11e956be6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0ab80e5f-d412-44ec-aaa2-a891fda1fcc9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0ab80e5f-d412-44ec-aaa2-a891fda1fcc9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "851f987c-35c3-4800-bd54-50fc70b332b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "851f987c-35c3-4800-bd54-50fc70b332b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "538df82b-3013-43e3-a33b-2afcc8cb51ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "538df82b-3013-43e3-a33b-2afcc8cb51ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d741db06-c29a-4103-b964-b40d0a9d8e7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d741db06-c29a-4103-b964-b40d0a9d8e7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 
0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "df36bd35-a8a0-4d00-b646-851f1824e0f3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "df36bd35-a8a0-4d00-b646-851f1824e0f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:30.786 04:07:31 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:11:30.786 04:07:31 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:30.786 /home/vagrant/spdk_repo/spdk 00:11:30.786 04:07:31 -- bdev/blockdev.sh@360 -- # popd 00:11:30.786 04:07:31 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:11:30.786 04:07:31 -- bdev/blockdev.sh@362 -- # return 0 00:11:30.786 00:11:30.786 real 0m11.181s 00:11:30.786 user 0m31.948s 00:11:30.786 sys 0m17.467s 00:11:30.786 04:07:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.786 ************************************ 00:11:30.786 END TEST bdev_fio 00:11:30.786 ************************************ 00:11:30.786 04:07:31 -- common/autotest_common.sh@10 -- # set +x 00:11:30.786 04:07:31 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:30.786 04:07:31 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:30.786 04:07:31 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:11:30.786 04:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.786 04:07:31 -- common/autotest_common.sh@10 -- # set +x 00:11:30.786 ************************************ 00:11:30.786 START TEST bdev_verify 00:11:30.786 ************************************ 00:11:30.786 04:07:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:30.786 [2024-11-26 04:07:31.292055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:30.786 [2024-11-26 04:07:31.292183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79682 ] 00:11:30.786 [2024-11-26 04:07:31.440887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:30.786 [2024-11-26 04:07:31.472744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.786 [2024-11-26 04:07:31.472779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.786 Running I/O for 5 seconds... 
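The rw_verify stage traced above is worth unpacking: fio_config_gen writes one [job_<bdev>] section per bdev into bdev.fio and appends serialize_overlap=1 (the fio-3* / AIO branch), then fio_bdev resolves which ASan runtime the fio plugin links against and preloads it ahead of the plugin so the sanitizer initializes before fio does. Condensed into plain shell as a sketch: the paths, the six job names, and the ldd|grep|awk pipeline come straight from the trace, while the [global] template contents (beyond serialize_overlap=1) are an assumption, since the cat'd template itself is not echoed into the log.

  # Sketch of the bdev_fio_rw_verify flow traced above.
  SPDK=/home/vagrant/spdk_repo/spdk
  JOB=$SPDK/test/bdev/bdev.fio
  PLUGIN=$SPDK/build/fio/spdk_bdev

  : > "$JOB"
  echo '[global]'            >> "$JOB"   # template contents assumed
  echo 'serialize_overlap=1' >> "$JOB"   # emitted when fio --version matches fio-3*
  for b in nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1; do
      echo "[job_$b]"    >> "$JOB"
      echo "filename=$b" >> "$JOB"
  done

  # Resolve the ASan runtime the plugin was built against and preload it
  # together with the plugin, exactly as the sanitizer loop above does.
  asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$JOB" \
      --verify_state_save=0 --spdk_json_conf="$SPDK/test/bdev/bdev.json"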
00:11:36.048 00:11:36.048 Latency(us) 00:11:36.048 [2024-11-26T04:07:37.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x0 length 0x20000 00:11:36.048 nvme0n1 : 5.06 2746.98 10.73 0.00 0.00 46403.20 13712.15 70577.23 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x20000 length 0x20000 00:11:36.048 nvme0n1 : 5.07 2751.05 10.75 0.00 0.00 46304.64 19660.80 68560.74 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x0 length 0x80000 00:11:36.048 nvme1n1 : 5.06 2887.12 11.28 0.00 0.00 44168.36 12905.55 64527.75 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x80000 length 0x80000 00:11:36.048 nvme1n1 : 5.07 2804.02 10.95 0.00 0.00 45402.71 4839.58 66947.54 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x0 length 0x80000 00:11:36.048 nvme1n2 : 5.07 2855.79 11.16 0.00 0.00 44554.80 3428.04 65334.35 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x80000 length 0x80000 00:11:36.048 nvme1n2 : 5.07 2658.17 10.38 0.00 0.00 47847.42 2646.65 68560.74 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x0 length 0x80000 00:11:36.048 nvme1n3 : 5.06 2796.80 10.93 0.00 0.00 45491.78 3302.01 66544.25 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x80000 length 0x80000 00:11:36.048 nvme1n3 : 5.06 2770.97 10.82 0.00 0.00 45897.55 3302.01 62511.26 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x0 length 0xbd0bd 00:11:36.048 nvme2n1 : 5.06 2882.20 11.26 0.00 0.00 44096.70 6604.01 59688.17 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:36.048 nvme2n1 : 5.07 2888.53 11.28 0.00 0.00 43960.01 5293.29 53638.70 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0x0 length 0xa0000 00:11:36.048 nvme3n1 : 5.07 2915.86 11.39 0.00 0.00 43482.70 4537.11 56058.49 00:11:36.048 [2024-11-26T04:07:37.816Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:36.048 Verification LBA range: start 0xa0000 length 0xa0000 00:11:36.048 nvme3n1 : 5.07 2847.75 11.12 0.00 0.00 44516.74 4259.84 61301.37 00:11:36.048 [2024-11-26T04:07:37.816Z] =================================================================================================================== 00:11:36.048 [2024-11-26T04:07:37.816Z] Total : 33805.24 132.05 0.00 0.00 45145.74 2646.65 70577.23 00:11:36.048 00:11:36.048 real 0m5.674s 00:11:36.048 user 0m6.868s 
00:11:36.048 sys 0m3.271s 00:11:36.048 ************************************ 00:11:36.048 END TEST bdev_verify 00:11:36.048 ************************************ 00:11:36.048 04:07:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:36.048 04:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:36.048 04:07:36 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:36.048 04:07:36 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:11:36.048 04:07:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.048 04:07:36 -- common/autotest_common.sh@10 -- # set +x 00:11:36.048 ************************************ 00:11:36.048 START TEST bdev_verify_big_io 00:11:36.048 ************************************ 00:11:36.048 04:07:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:36.048 [2024-11-26 04:07:36.991696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:36.048 [2024-11-26 04:07:36.991796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79764 ] 00:11:36.048 [2024-11-26 04:07:37.133967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.048 [2024-11-26 04:07:37.173709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.048 [2024-11-26 04:07:37.173743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.048 Running I/O for 5 seconds... 
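Both verification passes are driven by the same bdevperf example app against the same bdev.json; the only difference between bdev_verify and bdev_verify_big_io is the I/O size (-o 4096 versus -o 65536). Run standalone, the big-I/O pass reduces to the following, with flags copied verbatim from the run_test line above:

  # -q 128 in-flight I/Os, 64 KiB verify workload, 5 seconds, on cores 0-1 (-m 0x3).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3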
00:11:41.345 00:11:41.345 Latency(us) 00:11:41.345 [2024-11-26T04:07:43.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x0 length 0x2000 00:11:41.345 nvme0n1 : 5.62 187.22 11.70 0.00 0.00 668877.97 122602.73 664635.86 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x2000 length 0x2000 00:11:41.345 nvme0n1 : 5.52 284.07 17.75 0.00 0.00 437970.68 101227.91 619466.44 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x0 length 0x8000 00:11:41.345 nvme1n1 : 5.63 200.53 12.53 0.00 0.00 613799.90 123409.33 629145.60 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x8000 length 0x8000 00:11:41.345 nvme1n1 : 5.52 252.01 15.75 0.00 0.00 486017.56 68157.44 577523.40 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x0 length 0x8000 00:11:41.345 nvme1n2 : 5.63 187.32 11.71 0.00 0.00 647756.96 119376.34 719484.46 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x8000 length 0x8000 00:11:41.345 nvme1n2 : 5.63 263.51 16.47 0.00 0.00 452944.60 44766.13 506542.87 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x0 length 0x8000 00:11:41.345 nvme1n3 : 5.64 170.68 10.67 0.00 0.00 698138.56 66947.54 761427.50 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x8000 length 0x8000 00:11:41.345 nvme1n3 : 5.62 262.86 16.43 0.00 0.00 448085.53 42547.99 567844.23 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x0 length 0xbd0b 00:11:41.345 nvme2n1 : 5.65 215.00 13.44 0.00 0.00 546818.12 4310.25 683994.19 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:41.345 nvme2n1 : 5.62 352.94 22.06 0.00 0.00 327874.44 30650.68 490410.93 00:11:41.345 [2024-11-26T04:07:43.113Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:41.345 Verification LBA range: start 0x0 length 0xa000 00:11:41.346 nvme3n1 : 5.65 186.42 11.65 0.00 0.00 619653.17 3604.48 845313.58 00:11:41.346 [2024-11-26T04:07:43.114Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:41.346 Verification LBA range: start 0xa000 length 0xa000 00:11:41.346 nvme3n1 : 5.64 294.31 18.39 0.00 0.00 389400.27 2016.49 483958.15 00:11:41.346 [2024-11-26T04:07:43.114Z] =================================================================================================================== 00:11:41.346 [2024-11-26T04:07:43.114Z] Total : 2856.87 178.55 0.00 0.00 502948.84 2016.49 845313.58 00:11:41.605 00:11:41.605 real 0m6.286s 00:11:41.605 user 
0m11.541s 00:11:41.605 sys 0m0.448s 00:11:41.605 04:07:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.605 04:07:43 -- common/autotest_common.sh@10 -- # set +x 00:11:41.605 ************************************ 00:11:41.605 END TEST bdev_verify_big_io 00:11:41.605 ************************************ 00:11:41.605 04:07:43 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:41.605 04:07:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:41.605 04:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.605 04:07:43 -- common/autotest_common.sh@10 -- # set +x 00:11:41.605 ************************************ 00:11:41.605 START TEST bdev_write_zeroes 00:11:41.605 ************************************ 00:11:41.605 04:07:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:41.605 [2024-11-26 04:07:43.334680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:41.605 [2024-11-26 04:07:43.334817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79863 ] 00:11:41.863 [2024-11-26 04:07:43.485340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.863 [2024-11-26 04:07:43.516134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.122 Running I/O for 1 seconds... 00:11:43.056 00:11:43.056 Latency(us) 00:11:43.056 [2024-11-26T04:07:44.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.056 [2024-11-26T04:07:44.824Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:43.056 nvme0n1 : 1.01 11779.39 46.01 0.00 0.00 10855.79 8368.44 12905.55 00:11:43.056 [2024-11-26T04:07:44.824Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:43.056 nvme1n1 : 1.02 10800.29 42.19 0.00 0.00 11833.93 8318.03 26012.75 00:11:43.056 [2024-11-26T04:07:44.824Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:43.056 nvme1n2 : 1.01 11763.58 45.95 0.00 0.00 10855.29 8116.38 13006.38 00:11:43.056 [2024-11-26T04:07:44.824Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:43.056 nvme1n3 : 1.01 11749.74 45.90 0.00 0.00 10861.16 8065.97 12905.55 00:11:43.056 [2024-11-26T04:07:44.824Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:43.056 nvme2n1 : 1.02 22753.20 88.88 0.00 0.00 5601.32 3730.51 11292.36 00:11:43.056 [2024-11-26T04:07:44.824Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:43.056 nvme3n1 : 1.02 11693.09 45.68 0.00 0.00 10847.76 7763.50 13208.02 00:11:43.056 [2024-11-26T04:07:44.824Z] =================================================================================================================== 00:11:43.056 [2024-11-26T04:07:44.824Z] Total : 80539.28 314.61 0.00 0.00 9500.14 3730.51 26012.75 00:11:43.315 00:11:43.315 real 0m1.605s 00:11:43.315 user 0m0.861s 00:11:43.315 sys 0m0.588s 00:11:43.315 04:07:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 
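The write_zeroes pass above completes in about a second, with nvme2n1 clearing roughly twice the IOPS of the other targets at half the average latency. Whether a bdev advertises a native write-zeroes path is visible in the same supported_io_types structure that the trim stage filters with jq further up; a quick check against a live target might look like this (standard scripts/rpc.py location assumed):

  # Mirrors the supported_io_types.unmap jq filter used for the trim config
  # above, but selects bdevs that advertise write_zeroes instead.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'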
00:11:43.315 04:07:44 -- common/autotest_common.sh@10 -- # set +x 00:11:43.315 ************************************ 00:11:43.315 END TEST bdev_write_zeroes 00:11:43.315 ************************************ 00:11:43.315 04:07:44 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:43.315 04:07:44 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:43.315 04:07:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.315 04:07:44 -- common/autotest_common.sh@10 -- # set +x 00:11:43.315 ************************************ 00:11:43.315 START TEST bdev_json_nonenclosed 00:11:43.315 ************************************ 00:11:43.315 04:07:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:43.315 [2024-11-26 04:07:44.971686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:43.315 [2024-11-26 04:07:44.971803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79902 ] 00:11:43.573 [2024-11-26 04:07:45.120952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.573 [2024-11-26 04:07:45.152060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.573 [2024-11-26 04:07:45.152218] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:43.573 [2024-11-26 04:07:45.152235] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.573 00:11:43.573 real 0m0.318s 00:11:43.573 user 0m0.134s 00:11:43.573 sys 0m0.081s 00:11:43.573 04:07:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:43.573 04:07:45 -- common/autotest_common.sh@10 -- # set +x 00:11:43.573 ************************************ 00:11:43.573 END TEST bdev_json_nonenclosed 00:11:43.573 ************************************ 00:11:43.573 04:07:45 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:43.573 04:07:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:43.573 04:07:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.573 04:07:45 -- common/autotest_common.sh@10 -- # set +x 00:11:43.573 ************************************ 00:11:43.573 START TEST bdev_json_nonarray 00:11:43.573 ************************************ 00:11:43.573 04:07:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:43.831 [2024-11-26 04:07:45.335609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
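The nonenclosed case feeds bdevperf a config whose top-level JSON value is not an object, and json_config rejects it with the "not enclosed in {}" error seen above. The test file itself is never echoed into the log; a hypothetical minimal reproduction, inferred only from the error text, would be a top-level array:

  # Hypothetical shape of nonenclosed.json -- the real file is not shown in
  # the log, only the validation error it provokes.
  printf '[ { "subsystem": "bdev", "config": [] } ]\n' > /tmp/nonenclosed.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
  # => Invalid JSON configuration: not enclosed in {}.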
00:11:43.831 [2024-11-26 04:07:45.335744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79928 ] 00:11:43.831 [2024-11-26 04:07:45.485622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.831 [2024-11-26 04:07:45.516983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.831 [2024-11-26 04:07:45.517141] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:11:43.831 [2024-11-26 04:07:45.517160] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:44.089 00:11:44.089 real 0m0.324s 00:11:44.089 user 0m0.121s 00:11:44.089 sys 0m0.100s 00:11:44.089 04:07:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:44.089 04:07:45 -- common/autotest_common.sh@10 -- # set +x 00:11:44.089 ************************************ 00:11:44.089 END TEST bdev_json_nonarray 00:11:44.089 ************************************ 00:11:44.089 04:07:45 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:11:44.089 04:07:45 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:11:44.089 04:07:45 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:11:44.089 04:07:45 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:11:44.089 04:07:45 -- bdev/blockdev.sh@809 -- # cleanup 00:11:44.089 04:07:45 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:44.089 04:07:45 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:44.089 04:07:45 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:11:44.089 04:07:45 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:11:44.089 04:07:45 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:11:44.089 04:07:45 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:11:44.089 04:07:45 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:45.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.564 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.564 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.939 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.939 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.939 00:12:12.939 real 1m13.402s 00:12:12.939 user 1m16.928s 00:12:12.939 sys 1m3.403s 00:12:12.939 04:08:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:12.939 04:08:14 -- common/autotest_common.sh@10 -- # set +x 00:12:12.939 ************************************ 00:12:12.939 END TEST blockdev_xnvme 00:12:12.939 ************************************ 00:12:12.939 04:08:14 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:12.939 04:08:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:12.939 04:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:12.939 04:08:14 -- common/autotest_common.sh@10 -- # set +x 00:12:12.939 ************************************ 00:12:12.939 START TEST ublk 00:12:12.939 ************************************ 00:12:12.939 04:08:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:12.939 * Looking for test storage... 
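Its sibling, the nonarray case tested just above, passes the enclosure check but binds "subsystems" to something other than an array, tripping the next validation step. Same caveat applies: the file contents are inferred from the error, not shown in the log.

  # Hypothetical minimal reproduction of the nonarray case.
  printf '{ "subsystems": {} }\n' > /tmp/nonarray.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1
  # => Invalid JSON configuration: 'subsystems' should be an array.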
00:12:12.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:12.939 04:08:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:12.939 04:08:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:12.939 04:08:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:13.199 04:08:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:13.199 04:08:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:13.199 04:08:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:13.199 04:08:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:13.199 04:08:14 -- scripts/common.sh@335 -- # IFS=.-: 00:12:13.199 04:08:14 -- scripts/common.sh@335 -- # read -ra ver1 00:12:13.199 04:08:14 -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.199 04:08:14 -- scripts/common.sh@336 -- # read -ra ver2 00:12:13.199 04:08:14 -- scripts/common.sh@337 -- # local 'op=<' 00:12:13.199 04:08:14 -- scripts/common.sh@339 -- # ver1_l=2 00:12:13.199 04:08:14 -- scripts/common.sh@340 -- # ver2_l=1 00:12:13.199 04:08:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:13.199 04:08:14 -- scripts/common.sh@343 -- # case "$op" in 00:12:13.199 04:08:14 -- scripts/common.sh@344 -- # : 1 00:12:13.199 04:08:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:13.199 04:08:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.199 04:08:14 -- scripts/common.sh@364 -- # decimal 1 00:12:13.199 04:08:14 -- scripts/common.sh@352 -- # local d=1 00:12:13.199 04:08:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.199 04:08:14 -- scripts/common.sh@354 -- # echo 1 00:12:13.199 04:08:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:13.199 04:08:14 -- scripts/common.sh@365 -- # decimal 2 00:12:13.199 04:08:14 -- scripts/common.sh@352 -- # local d=2 00:12:13.199 04:08:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.199 04:08:14 -- scripts/common.sh@354 -- # echo 2 00:12:13.199 04:08:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:13.199 04:08:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:13.199 04:08:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:13.199 04:08:14 -- scripts/common.sh@367 -- # return 0 00:12:13.199 04:08:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.199 04:08:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:13.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.199 --rc genhtml_branch_coverage=1 00:12:13.199 --rc genhtml_function_coverage=1 00:12:13.199 --rc genhtml_legend=1 00:12:13.199 --rc geninfo_all_blocks=1 00:12:13.199 --rc geninfo_unexecuted_blocks=1 00:12:13.199 00:12:13.199 ' 00:12:13.199 04:08:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:13.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.199 --rc genhtml_branch_coverage=1 00:12:13.199 --rc genhtml_function_coverage=1 00:12:13.199 --rc genhtml_legend=1 00:12:13.199 --rc geninfo_all_blocks=1 00:12:13.199 --rc geninfo_unexecuted_blocks=1 00:12:13.199 00:12:13.199 ' 00:12:13.199 04:08:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:13.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.199 --rc genhtml_branch_coverage=1 00:12:13.199 --rc genhtml_function_coverage=1 00:12:13.199 --rc genhtml_legend=1 00:12:13.199 --rc geninfo_all_blocks=1 00:12:13.199 --rc geninfo_unexecuted_blocks=1 00:12:13.199 00:12:13.199 ' 00:12:13.199 04:08:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:13.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.199 --rc genhtml_branch_coverage=1 00:12:13.199 --rc genhtml_function_coverage=1 00:12:13.199 --rc genhtml_legend=1 00:12:13.199 --rc geninfo_all_blocks=1 00:12:13.199 --rc geninfo_unexecuted_blocks=1 00:12:13.199 00:12:13.199 ' 00:12:13.199 04:08:14 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:13.199 04:08:14 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:13.199 04:08:14 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:13.199 04:08:14 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:13.199 04:08:14 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:13.199 04:08:14 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:13.199 04:08:14 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:13.199 04:08:14 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:13.199 04:08:14 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:13.199 04:08:14 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:13.199 04:08:14 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:13.199 04:08:14 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:13.199 04:08:14 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:13.199 04:08:14 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:13.199 04:08:14 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:13.199 04:08:14 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:13.199 04:08:14 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:13.199 04:08:14 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:13.199 04:08:14 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:13.199 04:08:14 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:13.199 04:08:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:13.199 04:08:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.199 04:08:14 -- common/autotest_common.sh@10 -- # set +x 00:12:13.199 ************************************ 00:12:13.199 START TEST test_save_ublk_config 00:12:13.199 ************************************ 00:12:13.199 04:08:14 -- common/autotest_common.sh@1114 -- # test_save_config 00:12:13.199 04:08:14 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:13.199 04:08:14 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:13.199 04:08:14 -- ublk/ublk.sh@103 -- # tgtpid=80482 00:12:13.199 04:08:14 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:13.199 04:08:14 -- ublk/ublk.sh@106 -- # waitforlisten 80482 00:12:13.199 04:08:14 -- common/autotest_common.sh@829 -- # '[' -z 80482 ']' 00:12:13.199 04:08:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.199 04:08:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.199 04:08:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.199 04:08:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.199 04:08:14 -- common/autotest_common.sh@10 -- # set +x 00:12:13.199 [2024-11-26 04:08:14.826322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
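test_save_ublk_config, which has just started above, exercises a save/restore round trip: bring up spdk_tgt with -L ublk, create a ublk block device backed by a malloc bdev, snapshot the live configuration with save_config, then boot a second target from that snapshot. Reduced to rpc.py calls as a sketch (the test drives these through rpc_cmd inside ublk.sh rather than this exact CLI; the 32 MiB malloc size follows from num_blocks 8192 x block_size 4096 in the saved config below, and the -q/-d spellings match the ublk_start_disk call traced later in this suite):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC ublk_create_target                      # saved config pins cpumask "1"
  $RPC bdev_malloc_create -b malloc0 32 4096   # 8192 blocks of 4096 B
  $RPC ublk_start_disk malloc0 0 -q 1 -d 128   # 1 queue, depth 128
  $RPC save_config > /tmp/ublk_config.json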
00:12:13.199 [2024-11-26 04:08:14.826426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80482 ] 00:12:13.458 [2024-11-26 04:08:14.970780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.458 [2024-11-26 04:08:15.009886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:13.458 [2024-11-26 04:08:15.010080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.025 04:08:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.025 04:08:15 -- common/autotest_common.sh@862 -- # return 0 00:12:14.025 04:08:15 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:14.025 04:08:15 -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:14.025 04:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.025 04:08:15 -- common/autotest_common.sh@10 -- # set +x 00:12:14.025 [2024-11-26 04:08:15.651730] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:14.025 malloc0 00:12:14.025 [2024-11-26 04:08:15.675632] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:14.025 [2024-11-26 04:08:15.675707] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:14.025 [2024-11-26 04:08:15.675715] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:14.025 [2024-11-26 04:08:15.675723] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:14.025 [2024-11-26 04:08:15.683547] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:14.025 [2024-11-26 04:08:15.683577] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:14.025 [2024-11-26 04:08:15.691532] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:14.025 [2024-11-26 04:08:15.691629] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:14.025 [2024-11-26 04:08:15.708526] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:14.025 0 00:12:14.025 04:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.025 04:08:15 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:12:14.025 04:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.025 04:08:15 -- common/autotest_common.sh@10 -- # set +x 00:12:14.284 04:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.284 04:08:15 -- ublk/ublk.sh@115 -- # config='{ 00:12:14.284 "subsystems": [ 00:12:14.284 { 00:12:14.284 "subsystem": "iobuf", 00:12:14.284 "config": [ 00:12:14.284 { 00:12:14.284 "method": "iobuf_set_options", 00:12:14.284 "params": { 00:12:14.284 "small_pool_count": 8192, 00:12:14.284 "large_pool_count": 1024, 00:12:14.284 "small_bufsize": 8192, 00:12:14.284 "large_bufsize": 135168 00:12:14.284 } 00:12:14.284 } 00:12:14.284 ] 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "subsystem": "sock", 00:12:14.284 "config": [ 00:12:14.284 { 00:12:14.284 "method": "sock_impl_set_options", 00:12:14.284 "params": { 00:12:14.284 "impl_name": "posix", 00:12:14.284 "recv_buf_size": 2097152, 00:12:14.284 "send_buf_size": 2097152, 00:12:14.284 "enable_recv_pipe": true, 00:12:14.284 "enable_quickack": false, 00:12:14.284 "enable_placement_id": 0, 00:12:14.284 
"enable_zerocopy_send_server": true, 00:12:14.284 "enable_zerocopy_send_client": false, 00:12:14.284 "zerocopy_threshold": 0, 00:12:14.284 "tls_version": 0, 00:12:14.284 "enable_ktls": false 00:12:14.284 } 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "method": "sock_impl_set_options", 00:12:14.284 "params": { 00:12:14.284 "impl_name": "ssl", 00:12:14.284 "recv_buf_size": 4096, 00:12:14.284 "send_buf_size": 4096, 00:12:14.284 "enable_recv_pipe": true, 00:12:14.284 "enable_quickack": false, 00:12:14.284 "enable_placement_id": 0, 00:12:14.284 "enable_zerocopy_send_server": true, 00:12:14.284 "enable_zerocopy_send_client": false, 00:12:14.284 "zerocopy_threshold": 0, 00:12:14.284 "tls_version": 0, 00:12:14.284 "enable_ktls": false 00:12:14.284 } 00:12:14.284 } 00:12:14.284 ] 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "subsystem": "vmd", 00:12:14.284 "config": [] 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "subsystem": "accel", 00:12:14.284 "config": [ 00:12:14.284 { 00:12:14.284 "method": "accel_set_options", 00:12:14.284 "params": { 00:12:14.284 "small_cache_size": 128, 00:12:14.284 "large_cache_size": 16, 00:12:14.284 "task_count": 2048, 00:12:14.284 "sequence_count": 2048, 00:12:14.284 "buf_count": 2048 00:12:14.284 } 00:12:14.284 } 00:12:14.284 ] 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "subsystem": "bdev", 00:12:14.284 "config": [ 00:12:14.284 { 00:12:14.284 "method": "bdev_set_options", 00:12:14.284 "params": { 00:12:14.284 "bdev_io_pool_size": 65535, 00:12:14.284 "bdev_io_cache_size": 256, 00:12:14.284 "bdev_auto_examine": true, 00:12:14.284 "iobuf_small_cache_size": 128, 00:12:14.284 "iobuf_large_cache_size": 16 00:12:14.284 } 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "method": "bdev_raid_set_options", 00:12:14.284 "params": { 00:12:14.284 "process_window_size_kb": 1024 00:12:14.284 } 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "method": "bdev_iscsi_set_options", 00:12:14.284 "params": { 00:12:14.284 "timeout_sec": 30 00:12:14.284 } 00:12:14.284 }, 00:12:14.284 { 00:12:14.284 "method": "bdev_nvme_set_options", 00:12:14.284 "params": { 00:12:14.284 "action_on_timeout": "none", 00:12:14.284 "timeout_us": 0, 00:12:14.284 "timeout_admin_us": 0, 00:12:14.284 "keep_alive_timeout_ms": 10000, 00:12:14.284 "transport_retry_count": 4, 00:12:14.284 "arbitration_burst": 0, 00:12:14.284 "low_priority_weight": 0, 00:12:14.284 "medium_priority_weight": 0, 00:12:14.284 "high_priority_weight": 0, 00:12:14.284 "nvme_adminq_poll_period_us": 10000, 00:12:14.284 "nvme_ioq_poll_period_us": 0, 00:12:14.284 "io_queue_requests": 0, 00:12:14.285 "delay_cmd_submit": true, 00:12:14.285 "bdev_retry_count": 3, 00:12:14.285 "transport_ack_timeout": 0, 00:12:14.285 "ctrlr_loss_timeout_sec": 0, 00:12:14.285 "reconnect_delay_sec": 0, 00:12:14.285 "fast_io_fail_timeout_sec": 0, 00:12:14.285 "generate_uuids": false, 00:12:14.285 "transport_tos": 0, 00:12:14.285 "io_path_stat": false, 00:12:14.285 "allow_accel_sequence": false 00:12:14.285 } 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "method": "bdev_nvme_set_hotplug", 00:12:14.285 "params": { 00:12:14.285 "period_us": 100000, 00:12:14.285 "enable": false 00:12:14.285 } 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "method": "bdev_malloc_create", 00:12:14.285 "params": { 00:12:14.285 "name": "malloc0", 00:12:14.285 "num_blocks": 8192, 00:12:14.285 "block_size": 4096, 00:12:14.285 "physical_block_size": 4096, 00:12:14.285 "uuid": "743e6eeb-54d3-42d6-8904-882385fc75c1", 00:12:14.285 "optimal_io_boundary": 0 00:12:14.285 } 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 
"method": "bdev_wait_for_examine" 00:12:14.285 } 00:12:14.285 ] 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "subsystem": "scsi", 00:12:14.285 "config": null 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "subsystem": "scheduler", 00:12:14.285 "config": [ 00:12:14.285 { 00:12:14.285 "method": "framework_set_scheduler", 00:12:14.285 "params": { 00:12:14.285 "name": "static" 00:12:14.285 } 00:12:14.285 } 00:12:14.285 ] 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "subsystem": "vhost_scsi", 00:12:14.285 "config": [] 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "subsystem": "vhost_blk", 00:12:14.285 "config": [] 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "subsystem": "ublk", 00:12:14.285 "config": [ 00:12:14.285 { 00:12:14.285 "method": "ublk_create_target", 00:12:14.285 "params": { 00:12:14.285 "cpumask": "1" 00:12:14.285 } 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "method": "ublk_start_disk", 00:12:14.285 "params": { 00:12:14.285 "bdev_name": "malloc0", 00:12:14.285 "ublk_id": 0, 00:12:14.285 "num_queues": 1, 00:12:14.285 "queue_depth": 128 00:12:14.285 } 00:12:14.285 } 00:12:14.285 ] 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "subsystem": "nbd", 00:12:14.285 "config": [] 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "subsystem": "nvmf", 00:12:14.285 "config": [ 00:12:14.285 { 00:12:14.285 "method": "nvmf_set_config", 00:12:14.285 "params": { 00:12:14.285 "discovery_filter": "match_any", 00:12:14.285 "admin_cmd_passthru": { 00:12:14.285 "identify_ctrlr": false 00:12:14.285 } 00:12:14.285 } 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "method": "nvmf_set_max_subsystems", 00:12:14.285 "params": { 00:12:14.285 "max_subsystems": 1024 00:12:14.285 } 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "method": "nvmf_set_crdt", 00:12:14.285 "params": { 00:12:14.285 "crdt1": 0, 00:12:14.285 "crdt2": 0, 00:12:14.285 "crdt3": 0 00:12:14.285 } 00:12:14.285 } 00:12:14.285 ] 00:12:14.285 }, 00:12:14.285 { 00:12:14.285 "subsystem": "iscsi", 00:12:14.285 "config": [ 00:12:14.285 { 00:12:14.285 "method": "iscsi_set_options", 00:12:14.285 "params": { 00:12:14.285 "node_base": "iqn.2016-06.io.spdk", 00:12:14.285 "max_sessions": 128, 00:12:14.285 "max_connections_per_session": 2, 00:12:14.285 "max_queue_depth": 64, 00:12:14.285 "default_time2wait": 2, 00:12:14.285 "default_time2retain": 20, 00:12:14.285 "first_burst_length": 8192, 00:12:14.285 "immediate_data": true, 00:12:14.285 "allow_duplicated_isid": false, 00:12:14.285 "error_recovery_level": 0, 00:12:14.285 "nop_timeout": 60, 00:12:14.285 "nop_in_interval": 30, 00:12:14.285 "disable_chap": false, 00:12:14.285 "require_chap": false, 00:12:14.285 "mutual_chap": false, 00:12:14.285 "chap_group": 0, 00:12:14.285 "max_large_datain_per_connection": 64, 00:12:14.285 "max_r2t_per_connection": 4, 00:12:14.285 "pdu_pool_size": 36864, 00:12:14.285 "immediate_data_pool_size": 16384, 00:12:14.285 "data_out_pool_size": 2048 00:12:14.285 } 00:12:14.285 } 00:12:14.285 ] 00:12:14.285 } 00:12:14.285 ] 00:12:14.285 }' 00:12:14.285 04:08:15 -- ublk/ublk.sh@116 -- # killprocess 80482 00:12:14.285 04:08:15 -- common/autotest_common.sh@936 -- # '[' -z 80482 ']' 00:12:14.285 04:08:15 -- common/autotest_common.sh@940 -- # kill -0 80482 00:12:14.285 04:08:15 -- common/autotest_common.sh@941 -- # uname 00:12:14.285 04:08:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:14.285 04:08:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80482 00:12:14.285 04:08:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:14.285 04:08:15 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:14.285 killing process with pid 80482 00:12:14.285 04:08:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80482' 00:12:14.285 04:08:15 -- common/autotest_common.sh@955 -- # kill 80482 00:12:14.285 04:08:15 -- common/autotest_common.sh@960 -- # wait 80482 00:12:14.544 [2024-11-26 04:08:16.159616] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:14.544 [2024-11-26 04:08:16.193598] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:14.544 [2024-11-26 04:08:16.193718] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:14.544 [2024-11-26 04:08:16.199525] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:14.544 [2024-11-26 04:08:16.199577] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:14.544 [2024-11-26 04:08:16.199588] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:14.544 [2024-11-26 04:08:16.199614] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:14.544 [2024-11-26 04:08:16.199751] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:14.803 04:08:16 -- ublk/ublk.sh@119 -- # tgtpid=80509 00:12:14.803 04:08:16 -- ublk/ublk.sh@121 -- # waitforlisten 80509 00:12:14.803 04:08:16 -- common/autotest_common.sh@829 -- # '[' -z 80509 ']' 00:12:14.803 04:08:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.803 04:08:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.803 04:08:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
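Note the -c /dev/fd/63 on the restarted target below: the saved configuration is fed back in through bash process substitution rather than a file on disk, which is why the entire JSON document is echoed into the trace a second time. The equivalent shape, using the snapshot path assumed in the sketch above:

  # /dev/fd/63 is what <(...) typically expands to in bash.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk \
      -c <(cat /tmp/ublk_config.json)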
00:12:14.803 04:08:16 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:12:14.803 04:08:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.803 04:08:16 -- common/autotest_common.sh@10 -- # set +x 00:12:14.803 04:08:16 -- ublk/ublk.sh@118 -- # echo '{ 00:12:14.803 "subsystems": [ 00:12:14.803 { 00:12:14.803 "subsystem": "iobuf", 00:12:14.803 "config": [ 00:12:14.803 { 00:12:14.803 "method": "iobuf_set_options", 00:12:14.803 "params": { 00:12:14.803 "small_pool_count": 8192, 00:12:14.803 "large_pool_count": 1024, 00:12:14.803 "small_bufsize": 8192, 00:12:14.803 "large_bufsize": 135168 00:12:14.803 } 00:12:14.803 } 00:12:14.803 ] 00:12:14.803 }, 00:12:14.803 { 00:12:14.803 "subsystem": "sock", 00:12:14.803 "config": [ 00:12:14.803 { 00:12:14.803 "method": "sock_impl_set_options", 00:12:14.803 "params": { 00:12:14.803 "impl_name": "posix", 00:12:14.803 "recv_buf_size": 2097152, 00:12:14.803 "send_buf_size": 2097152, 00:12:14.803 "enable_recv_pipe": true, 00:12:14.803 "enable_quickack": false, 00:12:14.803 "enable_placement_id": 0, 00:12:14.803 "enable_zerocopy_send_server": true, 00:12:14.803 "enable_zerocopy_send_client": false, 00:12:14.803 "zerocopy_threshold": 0, 00:12:14.803 "tls_version": 0, 00:12:14.803 "enable_ktls": false 00:12:14.803 } 00:12:14.803 }, 00:12:14.803 { 00:12:14.803 "method": "sock_impl_set_options", 00:12:14.803 "params": { 00:12:14.803 "impl_name": "ssl", 00:12:14.803 "recv_buf_size": 4096, 00:12:14.803 "send_buf_size": 4096, 00:12:14.803 "enable_recv_pipe": true, 00:12:14.803 "enable_quickack": false, 00:12:14.803 "enable_placement_id": 0, 00:12:14.803 "enable_zerocopy_send_server": true, 00:12:14.803 "enable_zerocopy_send_client": false, 00:12:14.803 "zerocopy_threshold": 0, 00:12:14.803 "tls_version": 0, 00:12:14.803 "enable_ktls": false 00:12:14.803 } 00:12:14.803 } 00:12:14.803 ] 00:12:14.803 }, 00:12:14.803 { 00:12:14.803 "subsystem": "vmd", 00:12:14.803 "config": [] 00:12:14.803 }, 00:12:14.803 { 00:12:14.803 "subsystem": "accel", 00:12:14.803 "config": [ 00:12:14.804 { 00:12:14.804 "method": "accel_set_options", 00:12:14.804 "params": { 00:12:14.804 "small_cache_size": 128, 00:12:14.804 "large_cache_size": 16, 00:12:14.804 "task_count": 2048, 00:12:14.804 "sequence_count": 2048, 00:12:14.804 "buf_count": 2048 00:12:14.804 } 00:12:14.804 } 00:12:14.804 ] 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "bdev", 00:12:14.804 "config": [ 00:12:14.804 { 00:12:14.804 "method": "bdev_set_options", 00:12:14.804 "params": { 00:12:14.804 "bdev_io_pool_size": 65535, 00:12:14.804 "bdev_io_cache_size": 256, 00:12:14.804 "bdev_auto_examine": true, 00:12:14.804 "iobuf_small_cache_size": 128, 00:12:14.804 "iobuf_large_cache_size": 16 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "bdev_raid_set_options", 00:12:14.804 "params": { 00:12:14.804 "process_window_size_kb": 1024 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "bdev_iscsi_set_options", 00:12:14.804 "params": { 00:12:14.804 "timeout_sec": 30 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "bdev_nvme_set_options", 00:12:14.804 "params": { 00:12:14.804 "action_on_timeout": "none", 00:12:14.804 "timeout_us": 0, 00:12:14.804 "timeout_admin_us": 0, 00:12:14.804 "keep_alive_timeout_ms": 10000, 00:12:14.804 "transport_retry_count": 4, 00:12:14.804 "arbitration_burst": 0, 00:12:14.804 "low_priority_weight": 0, 00:12:14.804 "medium_priority_weight": 0, 00:12:14.804 "high_priority_weight": 0, 
00:12:14.804 "nvme_adminq_poll_period_us": 10000, 00:12:14.804 "nvme_ioq_poll_period_us": 0, 00:12:14.804 "io_queue_requests": 0, 00:12:14.804 "delay_cmd_submit": true, 00:12:14.804 "bdev_retry_count": 3, 00:12:14.804 "transport_ack_timeout": 0, 00:12:14.804 "ctrlr_loss_timeout_sec": 0, 00:12:14.804 "reconnect_delay_sec": 0, 00:12:14.804 "fast_io_fail_timeout_sec": 0, 00:12:14.804 "generate_uuids": false, 00:12:14.804 "transport_tos": 0, 00:12:14.804 "io_path_stat": false, 00:12:14.804 "allow_accel_sequence": false 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "bdev_nvme_set_hotplug", 00:12:14.804 "params": { 00:12:14.804 "period_us": 100000, 00:12:14.804 "enable": false 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "bdev_malloc_create", 00:12:14.804 "params": { 00:12:14.804 "name": "malloc0", 00:12:14.804 "num_blocks": 8192, 00:12:14.804 "block_size": 4096, 00:12:14.804 "physical_block_size": 4096, 00:12:14.804 "uuid": "743e6eeb-54d3-42d6-8904-882385fc75c1", 00:12:14.804 "optimal_io_boundary": 0 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "bdev_wait_for_examine" 00:12:14.804 } 00:12:14.804 ] 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "scsi", 00:12:14.804 "config": null 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "scheduler", 00:12:14.804 "config": [ 00:12:14.804 { 00:12:14.804 "method": "framework_set_scheduler", 00:12:14.804 "params": { 00:12:14.804 "name": "static" 00:12:14.804 } 00:12:14.804 } 00:12:14.804 ] 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "vhost_scsi", 00:12:14.804 "config": [] 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "vhost_blk", 00:12:14.804 "config": [] 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "ublk", 00:12:14.804 "config": [ 00:12:14.804 { 00:12:14.804 "method": "ublk_create_target", 00:12:14.804 "params": { 00:12:14.804 "cpumask": "1" 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "ublk_start_disk", 00:12:14.804 "params": { 00:12:14.804 "bdev_name": "malloc0", 00:12:14.804 "ublk_id": 0, 00:12:14.804 "num_queues": 1, 00:12:14.804 "queue_depth": 128 00:12:14.804 } 00:12:14.804 } 00:12:14.804 ] 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "nbd", 00:12:14.804 "config": [] 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "nvmf", 00:12:14.804 "config": [ 00:12:14.804 { 00:12:14.804 "method": "nvmf_set_config", 00:12:14.804 "params": { 00:12:14.804 "discovery_filter": "match_any", 00:12:14.804 "admin_cmd_passthru": { 00:12:14.804 "identify_ctrlr": false 00:12:14.804 } 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "nvmf_set_max_subsystems", 00:12:14.804 "params": { 00:12:14.804 "max_subsystems": 1024 00:12:14.804 } 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "method": "nvmf_set_crdt", 00:12:14.804 "params": { 00:12:14.804 "crdt1": 0, 00:12:14.804 "crdt2": 0, 00:12:14.804 "crdt3": 0 00:12:14.804 } 00:12:14.804 } 00:12:14.804 ] 00:12:14.804 }, 00:12:14.804 { 00:12:14.804 "subsystem": "iscsi", 00:12:14.804 "config": [ 00:12:14.804 { 00:12:14.804 "method": "iscsi_set_options", 00:12:14.804 "params": { 00:12:14.804 "node_base": "iqn.2016-06.io.spdk", 00:12:14.804 "max_sessions": 128, 00:12:14.804 "max_connections_per_session": 2, 00:12:14.804 "max_queue_depth": 64, 00:12:14.804 "default_time2wait": 2, 00:12:14.804 "default_time2retain": 20, 00:12:14.804 "first_burst_length": 8192, 00:12:14.804 "immediate_data": true, 00:12:14.804 "allow_duplicated_isid": false, 00:12:14.804 
"error_recovery_level": 0, 00:12:14.804 "nop_timeout": 60, 00:12:14.804 "nop_in_interval": 30, 00:12:14.804 "disable_chap": false, 00:12:14.804 "require_chap": false, 00:12:14.804 "mutual_chap": false, 00:12:14.804 "chap_group": 0, 00:12:14.804 "max_large_datain_per_connection": 64, 00:12:14.804 "max_r2t_per_connection": 4, 00:12:14.804 "pdu_pool_size": 36864, 00:12:14.804 "immediate_data_pool_size": 16384, 00:12:14.804 "data_out_pool_size": 2048 00:12:14.804 } 00:12:14.804 } 00:12:14.804 ] 00:12:14.804 } 00:12:14.804 ] 00:12:14.804 }' 00:12:14.804 [2024-11-26 04:08:16.560416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:14.804 [2024-11-26 04:08:16.560535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80509 ] 00:12:15.063 [2024-11-26 04:08:16.708625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.063 [2024-11-26 04:08:16.748956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:15.063 [2024-11-26 04:08:16.749139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.322 [2024-11-26 04:08:17.025721] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:15.322 [2024-11-26 04:08:17.033613] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:15.322 [2024-11-26 04:08:17.033688] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:15.322 [2024-11-26 04:08:17.033695] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:15.322 [2024-11-26 04:08:17.033702] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:15.322 [2024-11-26 04:08:17.042590] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:15.322 [2024-11-26 04:08:17.042611] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:15.322 [2024-11-26 04:08:17.049528] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:15.322 [2024-11-26 04:08:17.049615] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:15.322 [2024-11-26 04:08:17.066523] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:15.889 04:08:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.889 04:08:17 -- common/autotest_common.sh@862 -- # return 0 00:12:15.889 04:08:17 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:12:15.889 04:08:17 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:12:15.889 04:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.889 04:08:17 -- common/autotest_common.sh@10 -- # set +x 00:12:15.889 04:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.889 04:08:17 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:15.889 04:08:17 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:12:15.889 04:08:17 -- ublk/ublk.sh@125 -- # killprocess 80509 00:12:15.889 04:08:17 -- common/autotest_common.sh@936 -- # '[' -z 80509 ']' 00:12:15.889 04:08:17 -- common/autotest_common.sh@940 -- # kill -0 80509 00:12:15.889 04:08:17 -- common/autotest_common.sh@941 -- # uname 00:12:15.889 04:08:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:12:15.889 04:08:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80509 00:12:15.889 killing process with pid 80509 00:12:15.889 04:08:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:15.889 04:08:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:15.889 04:08:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80509' 00:12:15.889 04:08:17 -- common/autotest_common.sh@955 -- # kill 80509 00:12:15.889 04:08:17 -- common/autotest_common.sh@960 -- # wait 80509 00:12:15.889 [2024-11-26 04:08:17.610212] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:15.889 [2024-11-26 04:08:17.648600] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:15.889 [2024-11-26 04:08:17.648720] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:16.147 [2024-11-26 04:08:17.656528] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:16.147 [2024-11-26 04:08:17.656578] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:16.147 [2024-11-26 04:08:17.656585] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:16.147 [2024-11-26 04:08:17.656615] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:16.147 [2024-11-26 04:08:17.656754] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:16.405 04:08:17 -- ublk/ublk.sh@126 -- # trap - EXIT 00:12:16.406 00:12:16.406 real 0m3.198s 00:12:16.406 user 0m2.349s 00:12:16.406 sys 0m1.423s 00:12:16.406 04:08:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:16.406 04:08:17 -- common/autotest_common.sh@10 -- # set +x 00:12:16.406 ************************************ 00:12:16.406 END TEST test_save_ublk_config 00:12:16.406 ************************************ 00:12:16.406 04:08:17 -- ublk/ublk.sh@139 -- # spdk_pid=80565 00:12:16.406 04:08:17 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:16.406 04:08:17 -- ublk/ublk.sh@141 -- # waitforlisten 80565 00:12:16.406 04:08:17 -- common/autotest_common.sh@829 -- # '[' -z 80565 ']' 00:12:16.406 04:08:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.406 04:08:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.406 04:08:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.406 04:08:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.406 04:08:17 -- common/autotest_common.sh@10 -- # set +x 00:12:16.406 04:08:17 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:16.406 [2024-11-26 04:08:18.056879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
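The JSON dump above is what test_save_ublk_config captures via the save_config RPC before tearing the target down and comparing against /dev/ublkb0. A minimal sketch of that round-trip, assuming rpc.py and spdk_tgt are on PATH, the default /var/tmp/spdk.sock socket, and spdk_tgt's --json option for config replay (file name illustrative, not from the trace):

    rpc.py save_config > ublk_config.json    # dump the live JSON shown above
    rpc.py ublk_stop_disk 0                  # detach /dev/ublkb0
    rpc.py ublk_destroy_target
    spdk_tgt --json ublk_config.json &       # a fresh target replays the same config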
00:12:16.406 [2024-11-26 04:08:18.056988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80565 ] 00:12:16.664 [2024-11-26 04:08:18.205194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:16.664 [2024-11-26 04:08:18.235871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:16.665 [2024-11-26 04:08:18.236349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.665 [2024-11-26 04:08:18.236449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.233 04:08:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.233 04:08:18 -- common/autotest_common.sh@862 -- # return 0 00:12:17.233 04:08:18 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:12:17.233 04:08:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:17.233 04:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.233 04:08:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.233 ************************************ 00:12:17.233 START TEST test_create_ublk 00:12:17.233 ************************************ 00:12:17.233 04:08:18 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:12:17.233 04:08:18 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:12:17.233 04:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.233 04:08:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.233 [2024-11-26 04:08:18.886526] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:17.233 04:08:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.233 04:08:18 -- ublk/ublk.sh@33 -- # ublk_target= 00:12:17.233 04:08:18 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:12:17.233 04:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.233 04:08:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.233 04:08:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.233 04:08:18 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:12:17.233 04:08:18 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:17.233 04:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.233 04:08:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.233 [2024-11-26 04:08:18.941647] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:17.233 [2024-11-26 04:08:18.942027] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:17.233 [2024-11-26 04:08:18.942041] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:17.233 [2024-11-26 04:08:18.942057] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:17.233 [2024-11-26 04:08:18.950707] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:17.233 [2024-11-26 04:08:18.950733] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:17.233 [2024-11-26 04:08:18.957529] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:17.233 [2024-11-26 04:08:18.969584] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:17.233 [2024-11-26 04:08:18.983613] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:12:17.233 04:08:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.233 04:08:18 -- ublk/ublk.sh@37 -- # ublk_id=0 00:12:17.233 04:08:18 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:12:17.233 04:08:18 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:12:17.233 04:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.233 04:08:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.491 04:08:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.491 04:08:19 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:12:17.491 { 00:12:17.491 "ublk_device": "/dev/ublkb0", 00:12:17.491 "id": 0, 00:12:17.491 "queue_depth": 512, 00:12:17.491 "num_queues": 4, 00:12:17.491 "bdev_name": "Malloc0" 00:12:17.491 } 00:12:17.491 ]' 00:12:17.491 04:08:19 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:12:17.491 04:08:19 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:17.491 04:08:19 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:12:17.491 04:08:19 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:12:17.491 04:08:19 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:12:17.491 04:08:19 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:12:17.491 04:08:19 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:12:17.491 04:08:19 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:12:17.491 04:08:19 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:12:17.491 04:08:19 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:17.491 04:08:19 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:12:17.491 04:08:19 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:12:17.491 04:08:19 -- lvol/common.sh@41 -- # local offset=0 00:12:17.491 04:08:19 -- lvol/common.sh@42 -- # local size=134217728 00:12:17.491 04:08:19 -- lvol/common.sh@43 -- # local rw=write 00:12:17.491 04:08:19 -- lvol/common.sh@44 -- # local pattern=0xcc 00:12:17.491 04:08:19 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:12:17.491 04:08:19 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:17.491 04:08:19 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:12:17.491 04:08:19 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:17.492 04:08:19 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:17.492 04:08:19 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:12:17.749 fio: verification read phase will never start because write phase uses all of runtime 00:12:17.749 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:17.749 fio-3.35 00:12:17.749 Starting 1 process 00:12:27.719 00:12:27.719 fio_test: (groupid=0, jobs=1): err= 0: pid=80604: Tue Nov 26 04:08:29 2024 00:12:27.719 write: IOPS=15.1k, BW=58.8MiB/s (61.6MB/s)(588MiB/10001msec); 0 zone resets 00:12:27.719 clat (usec): min=42, max=4123, avg=65.62, stdev=95.34 00:12:27.719 lat (usec): min=43, max=4132, avg=66.09, stdev=95.35 00:12:27.719 clat percentiles (usec): 00:12:27.719 | 1.00th=[ 49], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 57], 00:12:27.719 | 30.00th=[ 
59], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 63], 00:12:27.719 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 76], 00:12:27.719 | 99.00th=[ 87], 99.50th=[ 103], 99.90th=[ 1860], 99.95th=[ 2900], 00:12:27.719 | 99.99th=[ 3490] 00:12:27.719 bw ( KiB/s): min=55184, max=63560, per=99.97%, avg=60185.26, stdev=1846.66, samples=19 00:12:27.719 iops : min=13796, max=15890, avg=15046.32, stdev=461.67, samples=19 00:12:27.719 lat (usec) : 50=1.97%, 100=97.50%, 250=0.30%, 500=0.05%, 750=0.02% 00:12:27.719 lat (usec) : 1000=0.01% 00:12:27.719 lat (msec) : 2=0.06%, 4=0.09%, 10=0.01% 00:12:27.719 cpu : usr=2.40%, sys=14.39%, ctx=150545, majf=0, minf=796 00:12:27.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.719 issued rwts: total=0,150526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.719 00:12:27.719 Run status group 0 (all jobs): 00:12:27.719 WRITE: bw=58.8MiB/s (61.6MB/s), 58.8MiB/s-58.8MiB/s (61.6MB/s-61.6MB/s), io=588MiB (617MB), run=10001-10001msec 00:12:27.719 00:12:27.719 Disk stats (read/write): 00:12:27.719 ublkb0: ios=0/148956, merge=0/0, ticks=0/8090, in_queue=8091, util=99.10% 00:12:27.719 04:08:29 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:12:27.719 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.719 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.719 [2024-11-26 04:08:29.404054] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:27.719 [2024-11-26 04:08:29.434018] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:27.719 [2024-11-26 04:08:29.434956] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:27.719 [2024-11-26 04:08:29.439524] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:27.720 [2024-11-26 04:08:29.439783] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:27.720 [2024-11-26 04:08:29.439795] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:27.720 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.720 04:08:29 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:12:27.720 04:08:29 -- common/autotest_common.sh@650 -- # local es=0 00:12:27.720 04:08:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:12:27.720 04:08:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:27.720 04:08:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.720 04:08:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:27.720 04:08:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.720 04:08:29 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:12:27.720 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.720 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.720 [2024-11-26 04:08:29.455595] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:12:27.720 request: 00:12:27.720 { 00:12:27.720 "ublk_id": 0, 00:12:27.720 "method": "ublk_stop_disk", 00:12:27.720 "req_id": 1 00:12:27.720 } 00:12:27.720 Got JSON-RPC error response 00:12:27.720 response: 00:12:27.720 { 00:12:27.720 "code": -19, 00:12:27.720 
"message": "No such device" 00:12:27.720 } 00:12:27.720 04:08:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:27.720 04:08:29 -- common/autotest_common.sh@653 -- # es=1 00:12:27.720 04:08:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:27.720 04:08:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:27.720 04:08:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:27.720 04:08:29 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:12:27.720 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.720 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.720 [2024-11-26 04:08:29.471578] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:27.720 [2024-11-26 04:08:29.473784] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:27.720 [2024-11-26 04:08:29.473818] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:27.720 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.720 04:08:29 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:27.720 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.720 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.978 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.978 04:08:29 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:12:27.978 04:08:29 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:27.978 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.979 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.979 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.979 04:08:29 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:27.979 04:08:29 -- lvol/common.sh@26 -- # jq length 00:12:27.979 04:08:29 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:27.979 04:08:29 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:27.979 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.979 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.979 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.979 04:08:29 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:27.979 04:08:29 -- lvol/common.sh@28 -- # jq length 00:12:27.979 04:08:29 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:27.979 00:12:27.979 real 0m10.752s 00:12:27.979 user 0m0.541s 00:12:27.979 sys 0m1.512s 00:12:27.979 04:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:27.979 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.979 ************************************ 00:12:27.979 END TEST test_create_ublk 00:12:27.979 ************************************ 00:12:27.979 04:08:29 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:12:27.979 04:08:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:27.979 04:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.979 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.979 ************************************ 00:12:27.979 START TEST test_create_multi_ublk 00:12:27.979 ************************************ 00:12:27.979 04:08:29 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:12:27.979 04:08:29 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:12:27.979 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.979 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.979 [2024-11-26 04:08:29.675371] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created 
successfully 00:12:27.979 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.979 04:08:29 -- ublk/ublk.sh@62 -- # ublk_target= 00:12:27.979 04:08:29 -- ublk/ublk.sh@64 -- # seq 0 3 00:12:27.979 04:08:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:27.979 04:08:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:12:27.979 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.979 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:27.979 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.979 04:08:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:12:27.979 04:08:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:28.238 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.238 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:28.238 [2024-11-26 04:08:29.746630] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:28.238 [2024-11-26 04:08:29.746920] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:28.238 [2024-11-26 04:08:29.746931] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:28.238 [2024-11-26 04:08:29.746945] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:28.238 [2024-11-26 04:08:29.770533] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:28.238 [2024-11-26 04:08:29.770552] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:28.238 [2024-11-26 04:08:29.782527] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:28.238 [2024-11-26 04:08:29.783005] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:28.238 [2024-11-26 04:08:29.822526] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:28.238 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.238 04:08:29 -- ublk/ublk.sh@68 -- # ublk_id=0 00:12:28.238 04:08:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:28.238 04:08:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:12:28.238 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.238 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:28.238 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.238 04:08:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:12:28.238 04:08:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:12:28.238 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.238 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:28.238 [2024-11-26 04:08:29.906617] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:12:28.238 [2024-11-26 04:08:29.906905] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:12:28.238 [2024-11-26 04:08:29.906918] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:28.238 [2024-11-26 04:08:29.906923] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:12:28.238 [2024-11-26 04:08:29.918533] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:28.238 [2024-11-26 04:08:29.918548] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:28.238 
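test_create_multi_ublk repeats the single-disk flow four times: one malloc bdev and one ublk device per iteration, each with 4 queues of depth 512. Condensed into plain rpc.py calls (the same RPCs the rpc_cmd wrapper issues in the trace, socket path assumed default):

    rpc.py ublk_create_target
    for i in 0 1 2 3; do
        rpc.py bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB bdev, 4 KiB blocks
        rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # exposes /dev/ublkb$i
    done
    rpc.py ublk_get_disks | jq -r '.[].ublk_device'      # expect /dev/ublkb0../dev/ublkb3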
[2024-11-26 04:08:29.930531] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:28.238 [2024-11-26 04:08:29.931022] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:12:28.238 [2024-11-26 04:08:29.938544] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:12:28.238 04:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.238 04:08:29 -- ublk/ublk.sh@68 -- # ublk_id=1 00:12:28.238 04:08:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:28.238 04:08:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:12:28.238 04:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.238 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.497 04:08:30 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:12:28.497 04:08:30 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:12:28.497 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.497 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 [2024-11-26 04:08:30.022604] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:12:28.497 [2024-11-26 04:08:30.022898] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:12:28.497 [2024-11-26 04:08:30.022909] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:12:28.497 [2024-11-26 04:08:30.022915] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:12:28.497 [2024-11-26 04:08:30.034551] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:28.497 [2024-11-26 04:08:30.034569] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:28.497 [2024-11-26 04:08:30.046527] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:28.497 [2024-11-26 04:08:30.047019] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:12:28.497 [2024-11-26 04:08:30.086523] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:12:28.497 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.497 04:08:30 -- ublk/ublk.sh@68 -- # ublk_id=2 00:12:28.497 04:08:30 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:28.497 04:08:30 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:12:28.497 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.497 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.497 04:08:30 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:12:28.497 04:08:30 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:12:28.497 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.497 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 [2024-11-26 04:08:30.170621] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:12:28.497 [2024-11-26 04:08:30.170910] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:12:28.497 [2024-11-26 04:08:30.170923] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:12:28.497 [2024-11-26 04:08:30.170928] ublk.c: 433:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:12:28.497 [2024-11-26 04:08:30.181550] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:28.497 [2024-11-26 04:08:30.181565] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:28.497 [2024-11-26 04:08:30.194537] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:28.497 [2024-11-26 04:08:30.195014] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:12:28.497 [2024-11-26 04:08:30.206526] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:12:28.497 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.497 04:08:30 -- ublk/ublk.sh@68 -- # ublk_id=3 00:12:28.497 04:08:30 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:12:28.497 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.497 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.497 04:08:30 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:12:28.497 { 00:12:28.497 "ublk_device": "/dev/ublkb0", 00:12:28.497 "id": 0, 00:12:28.497 "queue_depth": 512, 00:12:28.497 "num_queues": 4, 00:12:28.497 "bdev_name": "Malloc0" 00:12:28.497 }, 00:12:28.497 { 00:12:28.497 "ublk_device": "/dev/ublkb1", 00:12:28.497 "id": 1, 00:12:28.497 "queue_depth": 512, 00:12:28.497 "num_queues": 4, 00:12:28.497 "bdev_name": "Malloc1" 00:12:28.497 }, 00:12:28.497 { 00:12:28.497 "ublk_device": "/dev/ublkb2", 00:12:28.497 "id": 2, 00:12:28.497 "queue_depth": 512, 00:12:28.497 "num_queues": 4, 00:12:28.497 "bdev_name": "Malloc2" 00:12:28.497 }, 00:12:28.497 { 00:12:28.497 "ublk_device": "/dev/ublkb3", 00:12:28.497 "id": 3, 00:12:28.497 "queue_depth": 512, 00:12:28.497 "num_queues": 4, 00:12:28.497 "bdev_name": "Malloc3" 00:12:28.497 } 00:12:28.497 ]' 00:12:28.497 04:08:30 -- ublk/ublk.sh@72 -- # seq 0 3 00:12:28.497 04:08:30 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:28.497 04:08:30 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:12:28.755 04:08:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:12:28.755 04:08:30 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:12:28.755 04:08:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:12:28.755 04:08:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:12:28.755 04:08:30 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:28.755 04:08:30 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:12:28.755 04:08:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:12:28.755 04:08:30 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:12:28.755 04:08:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:12:28.755 04:08:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:28.755 04:08:30 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:12:29.014 04:08:30 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:12:29.014 04:08:30 -- 
ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.014 04:08:30 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:12:29.014 04:08:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:12:29.014 04:08:30 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:12:29.014 04:08:30 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:12:29.014 04:08:30 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:12:29.014 04:08:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:29.014 04:08:30 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:12:29.014 04:08:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:29.014 04:08:30 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:12:29.014 04:08:30 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:12:29.014 04:08:30 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.014 04:08:30 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:12:29.014 04:08:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:12:29.014 04:08:30 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:12:29.272 04:08:30 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:12:29.272 04:08:30 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:12:29.273 04:08:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:29.273 04:08:30 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:12:29.273 04:08:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:29.273 04:08:30 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:12:29.273 04:08:30 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:12:29.273 04:08:30 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:12:29.273 04:08:30 -- ublk/ublk.sh@85 -- # seq 0 3 00:12:29.273 04:08:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.273 04:08:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:12:29.273 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.273 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:29.273 [2024-11-26 04:08:30.878608] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:29.273 [2024-11-26 04:08:30.919025] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:29.273 [2024-11-26 04:08:30.920069] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:29.273 [2024-11-26 04:08:30.926542] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:29.273 [2024-11-26 04:08:30.926783] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:29.273 [2024-11-26 04:08:30.926796] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:29.273 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.273 04:08:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.273 04:08:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:12:29.273 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.273 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:29.273 [2024-11-26 04:08:30.942606] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:12:29.273 [2024-11-26 04:08:30.973940] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:29.273 [2024-11-26 04:08:30.975055] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:12:29.273 [2024-11-26 04:08:30.981529] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:29.273 [2024-11-26 04:08:30.981774] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:12:29.273 [2024-11-26 04:08:30.981787] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:12:29.273 04:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.273 04:08:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.273 04:08:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:12:29.273 04:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.273 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:12:29.273 [2024-11-26 04:08:30.997610] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:12:29.273 [2024-11-26 04:08:31.029571] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:29.273 [2024-11-26 04:08:31.030291] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:12:29.531 [2024-11-26 04:08:31.037536] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:29.531 [2024-11-26 04:08:31.037771] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:12:29.531 [2024-11-26 04:08:31.037783] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:12:29.531 04:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.531 04:08:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.531 04:08:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:12:29.531 04:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.531 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:12:29.531 [2024-11-26 04:08:31.053583] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:12:29.531 [2024-11-26 04:08:31.086038] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:29.531 [2024-11-26 04:08:31.086943] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:12:29.531 [2024-11-26 04:08:31.093534] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:29.531 [2024-11-26 04:08:31.093763] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:12:29.531 [2024-11-26 04:08:31.093775] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:12:29.531 04:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.531 04:08:31 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:12:29.531 [2024-11-26 04:08:31.277589] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:29.531 [2024-11-26 04:08:31.278879] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:29.531 [2024-11-26 04:08:31.278908] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:29.790 04:08:31 -- ublk/ublk.sh@93 -- # seq 0 3 00:12:29.790 04:08:31 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.790 04:08:31 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:29.790 04:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.790 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:12:29.790 04:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.790 04:08:31 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.790 04:08:31 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:12:29.790 04:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.790 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:12:29.790 04:08:31 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.790 04:08:31 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.790 04:08:31 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:29.790 04:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.790 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:12:29.790 04:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.790 04:08:31 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:29.790 04:08:31 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:12:29.790 04:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.790 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:12:29.790 04:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.790 04:08:31 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:12:29.790 04:08:31 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:29.790 04:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.790 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:12:29.791 04:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.791 04:08:31 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:29.791 04:08:31 -- lvol/common.sh@26 -- # jq length 00:12:30.049 04:08:31 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:30.049 04:08:31 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:30.049 04:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.049 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:12:30.049 04:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.049 04:08:31 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:30.049 04:08:31 -- lvol/common.sh@28 -- # jq length 00:12:30.049 04:08:31 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:30.049 00:12:30.049 real 0m1.954s 00:12:30.049 user 0m0.805s 00:12:30.049 sys 0m0.142s 00:12:30.049 04:08:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.049 ************************************ 00:12:30.049 END TEST test_create_multi_ublk 00:12:30.049 ************************************ 00:12:30.049 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:12:30.049 04:08:31 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:12:30.049 04:08:31 -- ublk/ublk.sh@147 -- # cleanup 00:12:30.049 04:08:31 -- ublk/ublk.sh@130 -- # killprocess 80565 00:12:30.049 04:08:31 -- common/autotest_common.sh@936 -- # '[' -z 80565 ']' 00:12:30.049 04:08:31 -- common/autotest_common.sh@940 -- # kill -0 80565 00:12:30.049 04:08:31 -- common/autotest_common.sh@941 -- # uname 00:12:30.049 04:08:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.049 04:08:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80565 00:12:30.049 04:08:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.049 killing process with pid 80565 00:12:30.049 04:08:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.049 04:08:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80565' 00:12:30.049 04:08:31 -- common/autotest_common.sh@955 -- # kill 80565 00:12:30.049 04:08:31 -- common/autotest_common.sh@960 -- # wait 80565 00:12:30.307 [2024-11-26 04:08:31.836697] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:30.307 [2024-11-26 04:08:31.836743] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:30.566 00:12:30.566 real 0m17.501s 00:12:30.566 user 0m27.399s 00:12:30.566 sys 0m7.602s 00:12:30.566 04:08:32 -- common/autotest_common.sh@1115 -- 
# xtrace_disable 00:12:30.566 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.566 ************************************ 00:12:30.566 END TEST ublk 00:12:30.566 ************************************ 00:12:30.566 04:08:32 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:12:30.566 04:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:30.566 04:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.566 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.566 ************************************ 00:12:30.566 START TEST ublk_recovery 00:12:30.566 ************************************ 00:12:30.566 04:08:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:12:30.566 * Looking for test storage... 00:12:30.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:30.566 04:08:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:30.566 04:08:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:30.566 04:08:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:30.566 04:08:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:30.566 04:08:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:30.566 04:08:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:30.566 04:08:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:30.566 04:08:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:30.566 04:08:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:30.566 04:08:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.566 04:08:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:30.566 04:08:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:30.566 04:08:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:30.566 04:08:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:30.566 04:08:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:30.566 04:08:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:30.566 04:08:32 -- scripts/common.sh@344 -- # : 1 00:12:30.566 04:08:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:30.566 04:08:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.566 04:08:32 -- scripts/common.sh@364 -- # decimal 1 00:12:30.566 04:08:32 -- scripts/common.sh@352 -- # local d=1 00:12:30.566 04:08:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.566 04:08:32 -- scripts/common.sh@354 -- # echo 1 00:12:30.566 04:08:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:30.566 04:08:32 -- scripts/common.sh@365 -- # decimal 2 00:12:30.566 04:08:32 -- scripts/common.sh@352 -- # local d=2 00:12:30.566 04:08:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.566 04:08:32 -- scripts/common.sh@354 -- # echo 2 00:12:30.566 04:08:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:30.566 04:08:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:30.566 04:08:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:30.566 04:08:32 -- scripts/common.sh@367 -- # return 0 00:12:30.566 04:08:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.566 04:08:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:30.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.566 --rc genhtml_branch_coverage=1 00:12:30.566 --rc genhtml_function_coverage=1 00:12:30.566 --rc genhtml_legend=1 00:12:30.566 --rc geninfo_all_blocks=1 00:12:30.566 --rc geninfo_unexecuted_blocks=1 00:12:30.566 00:12:30.566 ' 00:12:30.566 04:08:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:30.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.566 --rc genhtml_branch_coverage=1 00:12:30.566 --rc genhtml_function_coverage=1 00:12:30.566 --rc genhtml_legend=1 00:12:30.566 --rc geninfo_all_blocks=1 00:12:30.566 --rc geninfo_unexecuted_blocks=1 00:12:30.566 00:12:30.566 ' 00:12:30.566 04:08:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:30.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.566 --rc genhtml_branch_coverage=1 00:12:30.566 --rc genhtml_function_coverage=1 00:12:30.566 --rc genhtml_legend=1 00:12:30.566 --rc geninfo_all_blocks=1 00:12:30.566 --rc geninfo_unexecuted_blocks=1 00:12:30.566 00:12:30.566 ' 00:12:30.566 04:08:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:30.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.566 --rc genhtml_branch_coverage=1 00:12:30.566 --rc genhtml_function_coverage=1 00:12:30.566 --rc genhtml_legend=1 00:12:30.566 --rc geninfo_all_blocks=1 00:12:30.566 --rc geninfo_unexecuted_blocks=1 00:12:30.566 00:12:30.566 ' 00:12:30.566 04:08:32 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:30.566 04:08:32 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:30.566 04:08:32 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:30.566 04:08:32 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:30.566 04:08:32 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:30.566 04:08:32 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:30.566 04:08:32 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:30.566 04:08:32 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:30.566 04:08:32 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:30.566 04:08:32 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:12:30.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
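The recovery test below starts a target with ublk debug logging, backs /dev/ublkb1 with a 64 MiB malloc bdev, and runs fio against it before crashing the target. The setup distilled from the trace that follows (flags taken from the log, rpc.py socket assumed default):

    spdk_tgt -m 0x3 -L ublk &                       # reactors on cores 0-1, ublk debug logs
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096    # 64 MiB backing bdev
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # exposes /dev/ublkb1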
00:12:30.566 04:08:32 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=80928 00:12:30.566 04:08:32 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:30.566 04:08:32 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:30.566 04:08:32 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 80928 00:12:30.566 04:08:32 -- common/autotest_common.sh@829 -- # '[' -z 80928 ']' 00:12:30.566 04:08:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.566 04:08:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.566 04:08:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.566 04:08:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.566 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.825 [2024-11-26 04:08:32.346241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:30.825 [2024-11-26 04:08:32.346356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80928 ] 00:12:30.825 [2024-11-26 04:08:32.490770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:30.825 [2024-11-26 04:08:32.526697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:30.825 [2024-11-26 04:08:32.527142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.825 [2024-11-26 04:08:32.527206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.761 04:08:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.761 04:08:33 -- common/autotest_common.sh@862 -- # return 0 00:12:31.761 04:08:33 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:12:31.761 04:08:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.761 04:08:33 -- common/autotest_common.sh@10 -- # set +x 00:12:31.761 [2024-11-26 04:08:33.169406] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:31.761 04:08:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.761 04:08:33 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:12:31.761 04:08:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.761 04:08:33 -- common/autotest_common.sh@10 -- # set +x 00:12:31.761 malloc0 00:12:31.761 04:08:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.761 04:08:33 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:12:31.761 04:08:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.761 04:08:33 -- common/autotest_common.sh@10 -- # set +x 00:12:31.761 [2024-11-26 04:08:33.200636] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:12:31.761 [2024-11-26 04:08:33.200733] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:12:31.761 [2024-11-26 04:08:33.200745] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:31.761 [2024-11-26 04:08:33.200751] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:12:31.761 [2024-11-26 04:08:33.209600] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:31.761 [2024-11-26 
04:08:33.209623] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:31.761 [2024-11-26 04:08:33.216529] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:31.761 [2024-11-26 04:08:33.216647] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:12:31.761 [2024-11-26 04:08:33.238529] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:12:31.761 1 00:12:31.761 04:08:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.761 04:08:33 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:12:32.696 04:08:34 -- ublk/ublk_recovery.sh@31 -- # fio_proc=80961 00:12:32.696 04:08:34 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:12:32.696 04:08:34 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:12:32.696 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:32.696 fio-3.35 00:12:32.696 Starting 1 process 00:12:37.964 04:08:39 -- ublk/ublk_recovery.sh@36 -- # kill -9 80928 00:12:37.964 04:08:39 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:12:43.229 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 80928 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:12:43.229 04:08:44 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=81072 00:12:43.229 04:08:44 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:43.229 04:08:44 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:43.229 04:08:44 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 81072 00:12:43.229 04:08:44 -- common/autotest_common.sh@829 -- # '[' -z 81072 ']' 00:12:43.229 04:08:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.229 04:08:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.229 04:08:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.229 04:08:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.229 04:08:44 -- common/autotest_common.sh@10 -- # set +x 00:12:43.229 [2024-11-26 04:08:44.326572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
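The crash itself is a plain kill -9 of the target while fio keeps issuing I/O; the respawned target then re-adopts the live /dev/ublkb1 via ublk_recover_disk rather than recreating it. Sketched below with an assumed $spdk_pid variable, following the order the trace shows:

    kill -9 "$spdk_pid"                              # simulate a target crash mid-I/O
    spdk_tgt -m 0x3 -L ublk &                        # respawn the target
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096     # recreate the backing bdev
    rpc.py ublk_recover_disk malloc0 1               # re-attach /dev/ublkb1 without losing fio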
00:12:43.229 [2024-11-26 04:08:44.326690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81072 ] 00:12:43.229 [2024-11-26 04:08:44.473978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:43.229 [2024-11-26 04:08:44.505008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:43.229 [2024-11-26 04:08:44.505517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.229 [2024-11-26 04:08:44.505574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.487 04:08:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.487 04:08:45 -- common/autotest_common.sh@862 -- # return 0 00:12:43.487 04:08:45 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:12:43.487 04:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.487 04:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:43.487 [2024-11-26 04:08:45.145521] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:43.487 04:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.487 04:08:45 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:12:43.487 04:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.487 04:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:43.487 malloc0 00:12:43.488 04:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.488 04:08:45 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:12:43.488 04:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.488 04:08:45 -- common/autotest_common.sh@10 -- # set +x 00:12:43.488 [2024-11-26 04:08:45.176645] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:12:43.488 [2024-11-26 04:08:45.176688] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:43.488 [2024-11-26 04:08:45.176698] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:12:43.488 [2024-11-26 04:08:45.184558] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:12:43.488 [2024-11-26 04:08:45.184578] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:12:43.488 1 00:12:43.488 [2024-11-26 04:08:45.184659] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:12:43.488 04:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.488 04:08:45 -- ublk/ublk_recovery.sh@52 -- # wait 80961 00:12:43.488 [2024-11-26 04:08:45.192538] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:12:43.488 [2024-11-26 04:08:45.195876] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:12:43.488 [2024-11-26 04:08:45.199755] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:12:43.488 [2024-11-26 04:08:45.199775] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:13:39.728 00:13:39.728 fio_test: (groupid=0, jobs=1): err= 0: pid=80964: Tue Nov 26 04:09:34 2024 00:13:39.728 read: IOPS=27.9k, BW=109MiB/s (114MB/s)(6537MiB/60001msec) 00:13:39.728 slat (nsec): min=929, max=969074, avg=4911.89, 
stdev=1568.86 00:13:39.728 clat (usec): min=592, max=5955.9k, avg=2297.44, stdev=40379.78 00:13:39.728 lat (usec): min=649, max=5955.9k, avg=2302.35, stdev=40379.77 00:13:39.728 clat percentiles (usec): 00:13:39.728 | 1.00th=[ 1696], 5.00th=[ 1811], 10.00th=[ 1827], 20.00th=[ 1860], 00:13:39.728 | 30.00th=[ 1876], 40.00th=[ 1893], 50.00th=[ 1909], 60.00th=[ 1926], 00:13:39.728 | 70.00th=[ 1942], 80.00th=[ 1958], 90.00th=[ 2024], 95.00th=[ 2769], 00:13:39.728 | 99.00th=[ 4752], 99.50th=[ 5473], 99.90th=[ 6783], 99.95th=[ 8094], 00:13:39.728 | 99.99th=[12911] 00:13:39.728 bw ( KiB/s): min=24744, max=128816, per=100.00%, avg=122922.75, stdev=13820.88, samples=108 00:13:39.728 iops : min= 6186, max=32204, avg=30730.69, stdev=3455.22, samples=108 00:13:39.728 write: IOPS=27.9k, BW=109MiB/s (114MB/s)(6532MiB/60001msec); 0 zone resets 00:13:39.728 slat (nsec): min=915, max=141131, avg=4988.37, stdev=1402.55 00:13:39.728 clat (usec): min=535, max=5955.9k, avg=2282.45, stdev=32876.94 00:13:39.728 lat (usec): min=540, max=5955.9k, avg=2287.44, stdev=32876.94 00:13:39.728 clat percentiles (usec): 00:13:39.728 | 1.00th=[ 1745], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1942], 00:13:39.728 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2008], 00:13:39.728 | 70.00th=[ 2024], 80.00th=[ 2057], 90.00th=[ 2114], 95.00th=[ 2671], 00:13:39.728 | 99.00th=[ 4752], 99.50th=[ 5473], 99.90th=[ 6652], 99.95th=[ 8160], 00:13:39.728 | 99.99th=[13042] 00:13:39.728 bw ( KiB/s): min=23984, max=129712, per=100.00%, avg=122821.54, stdev=13944.25, samples=108 00:13:39.728 iops : min= 5996, max=32428, avg=30705.38, stdev=3486.06, samples=108 00:13:39.728 lat (usec) : 750=0.01%, 1000=0.01% 00:13:39.728 lat (msec) : 2=70.77%, 4=26.98%, 10=2.22%, 20=0.02%, >=2000=0.01% 00:13:39.728 cpu : usr=6.15%, sys=28.36%, ctx=111106, majf=0, minf=14 00:13:39.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:13:39.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:39.728 issued rwts: total=1673598,1672287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:39.728 00:13:39.728 Run status group 0 (all jobs): 00:13:39.728 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=6537MiB (6855MB), run=60001-60001msec 00:13:39.728 WRITE: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=6532MiB (6850MB), run=60001-60001msec 00:13:39.728 00:13:39.728 Disk stats (read/write): 00:13:39.728 ublkb1: ios=1670268/1668917, merge=0/0, ticks=3752802/3589133, in_queue=7341936, util=99.89% 00:13:39.728 04:09:34 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:13:39.728 04:09:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.728 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:39.728 [2024-11-26 04:09:34.504246] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:39.728 [2024-11-26 04:09:34.532538] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:39.728 [2024-11-26 04:09:34.532683] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:39.728 [2024-11-26 04:09:34.541537] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:39.728 [2024-11-26 04:09:34.541630] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 
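The fio summary above is self-consistent: total reads divided by runtime gives the reported IOPS, and IOPS times the 4 KiB block size gives the reported bandwidth. A quick check, with the values copied from the output:

    awk 'BEGIN { iops = 1673598 / 60.001; printf "%.0f IOPS, %.1f MB/s\n", iops, iops * 4096 / 1e6 }'
    # -> 27893 IOPS, 114.2 MB/s, matching the reported 109MiB/s (114MB/s)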
00:13:39.728 [2024-11-26 04:09:34.541640] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:39.728 04:09:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.728 04:09:34 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:13:39.728 04:09:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.728 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:13:39.728 [2024-11-26 04:09:34.553604] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:39.728 [2024-11-26 04:09:34.554490] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:39.728 [2024-11-26 04:09:34.554529] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:39.728 04:09:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.728 04:09:34 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:13:39.728 04:09:34 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:13:39.728 04:09:34 -- ublk/ublk_recovery.sh@14 -- # killprocess 81072 00:13:39.728 04:09:34 -- common/autotest_common.sh@936 -- # '[' -z 81072 ']' 00:13:39.728 04:09:34 -- common/autotest_common.sh@940 -- # kill -0 81072 00:13:39.728 04:09:34 -- common/autotest_common.sh@941 -- # uname 00:13:39.728 04:09:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.728 04:09:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81072 00:13:39.728 04:09:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:39.728 killing process with pid 81072 00:13:39.728 04:09:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:39.728 04:09:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81072' 00:13:39.728 04:09:34 -- common/autotest_common.sh@955 -- # kill 81072 00:13:39.728 04:09:34 -- common/autotest_common.sh@960 -- # wait 81072 00:13:39.728 [2024-11-26 04:09:34.825521] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:39.728 [2024-11-26 04:09:34.825585] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:39.728 ************************************ 00:13:39.728 END TEST ublk_recovery 00:13:39.728 00:13:39.728 real 1m3.051s 00:13:39.728 user 1m40.806s 00:13:39.728 sys 0m35.341s 00:13:39.728 04:09:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:39.728 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:13:39.728 ************************************ 00:13:39.728 04:09:35 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@255 -- # timing_exit lib 00:13:39.728 04:09:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.728 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:13:39.728 04:09:35 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:13:39.728 04:09:35 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:39.728 04:09:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:39.728 04:09:35 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:13:39.728 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:13:39.728 ************************************ 00:13:39.728 START TEST ftl 00:13:39.728 ************************************ 00:13:39.728 04:09:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:39.728 * Looking for test storage... 00:13:39.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:13:39.728 04:09:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:39.728 04:09:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:39.728 04:09:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:39.728 04:09:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:39.728 04:09:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:39.728 04:09:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:39.728 04:09:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:39.728 04:09:35 -- scripts/common.sh@335 -- # IFS=.-: 00:13:39.728 04:09:35 -- scripts/common.sh@335 -- # read -ra ver1 00:13:39.728 04:09:35 -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.728 04:09:35 -- scripts/common.sh@336 -- # read -ra ver2 00:13:39.728 04:09:35 -- scripts/common.sh@337 -- # local 'op=<' 00:13:39.728 04:09:35 -- scripts/common.sh@339 -- # ver1_l=2 00:13:39.728 04:09:35 -- scripts/common.sh@340 -- # ver2_l=1 00:13:39.728 04:09:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:39.728 04:09:35 -- scripts/common.sh@343 -- # case "$op" in 00:13:39.728 04:09:35 -- scripts/common.sh@344 -- # : 1 00:13:39.728 04:09:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:39.728 04:09:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:39.728 04:09:35 -- scripts/common.sh@364 -- # decimal 1 00:13:39.728 04:09:35 -- scripts/common.sh@352 -- # local d=1 00:13:39.728 04:09:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.728 04:09:35 -- scripts/common.sh@354 -- # echo 1 00:13:39.728 04:09:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:39.728 04:09:35 -- scripts/common.sh@365 -- # decimal 2 00:13:39.728 04:09:35 -- scripts/common.sh@352 -- # local d=2 00:13:39.728 04:09:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.728 04:09:35 -- scripts/common.sh@354 -- # echo 2 00:13:39.728 04:09:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:39.728 04:09:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:39.728 04:09:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:39.728 04:09:35 -- scripts/common.sh@367 -- # return 0 00:13:39.728 04:09:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.728 04:09:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.728 --rc genhtml_branch_coverage=1 00:13:39.728 --rc genhtml_function_coverage=1 00:13:39.728 --rc genhtml_legend=1 00:13:39.728 --rc geninfo_all_blocks=1 00:13:39.728 --rc geninfo_unexecuted_blocks=1 00:13:39.728 00:13:39.728 ' 00:13:39.729 04:09:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:39.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.729 --rc genhtml_branch_coverage=1 00:13:39.729 --rc genhtml_function_coverage=1 00:13:39.729 --rc genhtml_legend=1 00:13:39.729 --rc geninfo_all_blocks=1 00:13:39.729 --rc geninfo_unexecuted_blocks=1 00:13:39.729 00:13:39.729 ' 00:13:39.729 04:09:35 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:39.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.729 --rc genhtml_branch_coverage=1 00:13:39.729 --rc genhtml_function_coverage=1 00:13:39.729 --rc genhtml_legend=1 00:13:39.729 --rc geninfo_all_blocks=1 00:13:39.729 --rc geninfo_unexecuted_blocks=1 00:13:39.729 00:13:39.729 ' 00:13:39.729 04:09:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:39.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.729 --rc genhtml_branch_coverage=1 00:13:39.729 --rc genhtml_function_coverage=1 00:13:39.729 --rc genhtml_legend=1 00:13:39.729 --rc geninfo_all_blocks=1 00:13:39.729 --rc geninfo_unexecuted_blocks=1 00:13:39.729 00:13:39.729 ' 00:13:39.729 04:09:35 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:13:39.729 04:09:35 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:39.729 04:09:35 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:13:39.729 04:09:35 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:13:39.729 04:09:35 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:13:39.729 04:09:35 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:39.729 04:09:35 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.729 04:09:35 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:13:39.729 04:09:35 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:13:39.729 04:09:35 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.729 04:09:35 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.729 04:09:35 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:13:39.729 04:09:35 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:13:39.729 04:09:35 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:39.729 04:09:35 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:39.729 04:09:35 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:13:39.729 04:09:35 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:13:39.729 04:09:35 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.729 04:09:35 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.729 04:09:35 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:13:39.729 04:09:35 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:13:39.729 04:09:35 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:39.729 04:09:35 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:39.729 04:09:35 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:39.729 04:09:35 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:39.729 04:09:35 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:13:39.729 04:09:35 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:13:39.729 04:09:35 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.729 04:09:35 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.729 04:09:35 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.729 04:09:35 -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:13:39.729 04:09:35 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:13:39.729 04:09:35 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:13:39.729 04:09:35 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:13:39.729 04:09:35 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:39.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:39.729 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:39.729 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:39.729 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:39.729 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:39.729 04:09:35 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=81868 00:13:39.729 04:09:35 -- ftl/ftl.sh@38 -- # waitforlisten 81868 00:13:39.729 04:09:35 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:39.729 04:09:35 -- common/autotest_common.sh@829 -- # '[' -z 81868 ']' 00:13:39.729 04:09:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.729 04:09:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.729 04:09:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.729 04:09:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.729 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:13:39.729 [2024-11-26 04:09:35.906794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:39.729 [2024-11-26 04:09:35.907410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81868 ] 00:13:39.729 [2024-11-26 04:09:36.051576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.729 [2024-11-26 04:09:36.090820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.729 [2024-11-26 04:09:36.090997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.729 04:09:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.729 04:09:36 -- common/autotest_common.sh@862 -- # return 0 00:13:39.729 04:09:36 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:13:39.729 04:09:36 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:39.729 04:09:37 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:13:39.729 04:09:37 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:39.729 04:09:37 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:13:39.729 04:09:37 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:13:39.729 04:09:37 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:13:39.729 04:09:37 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:13:39.729 04:09:37 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:13:39.729 04:09:37 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:13:39.729 04:09:37 -- ftl/ftl.sh@50 
-- # break 00:13:39.729 04:09:37 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:13:39.729 04:09:37 -- ftl/ftl.sh@59 -- # base_size=1310720 00:13:39.729 04:09:37 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:13:39.729 04:09:37 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:13:39.729 04:09:38 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:13:39.729 04:09:38 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:13:39.729 04:09:38 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:13:39.729 04:09:38 -- ftl/ftl.sh@63 -- # break 00:13:39.729 04:09:38 -- ftl/ftl.sh@66 -- # killprocess 81868 00:13:39.729 04:09:38 -- common/autotest_common.sh@936 -- # '[' -z 81868 ']' 00:13:39.729 04:09:38 -- common/autotest_common.sh@940 -- # kill -0 81868 00:13:39.729 04:09:38 -- common/autotest_common.sh@941 -- # uname 00:13:39.729 04:09:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.729 04:09:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81868 00:13:39.729 killing process with pid 81868 00:13:39.729 04:09:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:39.729 04:09:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:39.729 04:09:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81868' 00:13:39.729 04:09:38 -- common/autotest_common.sh@955 -- # kill 81868 00:13:39.729 04:09:38 -- common/autotest_common.sh@960 -- # wait 81868 00:13:39.729 04:09:38 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:13:39.729 04:09:38 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:13:39.729 04:09:38 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:13:39.729 04:09:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:39.729 04:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.729 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.729 ************************************ 00:13:39.729 START TEST ftl_fio_basic 00:13:39.729 ************************************ 00:13:39.729 04:09:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:13:39.729 * Looking for test storage... 
00:13:39.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:13:39.729 04:09:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:39.729 04:09:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:39.729 04:09:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:39.729 04:09:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:39.729 04:09:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:39.729 04:09:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:39.729 04:09:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:39.729 04:09:38 -- scripts/common.sh@335 -- # IFS=.-: 00:13:39.729 04:09:38 -- scripts/common.sh@335 -- # read -ra ver1 00:13:39.729 04:09:38 -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.729 04:09:38 -- scripts/common.sh@336 -- # read -ra ver2 00:13:39.729 04:09:38 -- scripts/common.sh@337 -- # local 'op=<' 00:13:39.729 04:09:38 -- scripts/common.sh@339 -- # ver1_l=2 00:13:39.729 04:09:38 -- scripts/common.sh@340 -- # ver2_l=1 00:13:39.729 04:09:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:39.729 04:09:38 -- scripts/common.sh@343 -- # case "$op" in 00:13:39.729 04:09:38 -- scripts/common.sh@344 -- # : 1 00:13:39.729 04:09:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:39.729 04:09:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:39.729 04:09:38 -- scripts/common.sh@364 -- # decimal 1 00:13:39.729 04:09:38 -- scripts/common.sh@352 -- # local d=1 00:13:39.729 04:09:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.729 04:09:38 -- scripts/common.sh@354 -- # echo 1 00:13:39.729 04:09:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:39.729 04:09:38 -- scripts/common.sh@365 -- # decimal 2 00:13:39.729 04:09:38 -- scripts/common.sh@352 -- # local d=2 00:13:39.729 04:09:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.730 04:09:38 -- scripts/common.sh@354 -- # echo 2 00:13:39.730 04:09:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:39.730 04:09:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:39.730 04:09:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:39.730 04:09:38 -- scripts/common.sh@367 -- # return 0 00:13:39.730 04:09:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.730 04:09:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:39.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.730 --rc genhtml_branch_coverage=1 00:13:39.730 --rc genhtml_function_coverage=1 00:13:39.730 --rc genhtml_legend=1 00:13:39.730 --rc geninfo_all_blocks=1 00:13:39.730 --rc geninfo_unexecuted_blocks=1 00:13:39.730 00:13:39.730 ' 00:13:39.730 04:09:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:39.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.730 --rc genhtml_branch_coverage=1 00:13:39.730 --rc genhtml_function_coverage=1 00:13:39.730 --rc genhtml_legend=1 00:13:39.730 --rc geninfo_all_blocks=1 00:13:39.730 --rc geninfo_unexecuted_blocks=1 00:13:39.730 00:13:39.730 ' 00:13:39.730 04:09:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:39.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.730 --rc genhtml_branch_coverage=1 00:13:39.730 --rc genhtml_function_coverage=1 00:13:39.730 --rc genhtml_legend=1 00:13:39.730 --rc geninfo_all_blocks=1 00:13:39.730 --rc geninfo_unexecuted_blocks=1 00:13:39.730 00:13:39.730 ' 00:13:39.730 04:09:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:39.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.730 --rc genhtml_branch_coverage=1 00:13:39.730 --rc genhtml_function_coverage=1 00:13:39.730 --rc genhtml_legend=1 00:13:39.730 --rc geninfo_all_blocks=1 00:13:39.730 --rc geninfo_unexecuted_blocks=1 00:13:39.730 00:13:39.730 ' 00:13:39.730 04:09:38 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:13:39.730 04:09:38 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:13:39.730 04:09:38 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:13:39.730 04:09:38 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:13:39.730 04:09:38 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:13:39.730 04:09:38 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:39.730 04:09:38 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.730 04:09:38 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:13:39.730 04:09:38 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:13:39.730 04:09:38 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.730 04:09:38 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.730 04:09:38 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:13:39.730 04:09:38 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:13:39.730 04:09:38 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:39.730 04:09:38 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:39.730 04:09:38 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:13:39.730 04:09:38 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:13:39.730 04:09:38 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.730 04:09:38 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.730 04:09:38 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:13:39.730 04:09:38 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:13:39.730 04:09:38 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:39.730 04:09:38 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:39.730 04:09:38 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:39.730 04:09:38 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:39.730 04:09:38 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:13:39.730 04:09:38 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:13:39.730 04:09:38 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.730 04:09:38 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.730 04:09:38 -- ftl/fio.sh@11 -- # declare -A suite 00:13:39.730 04:09:38 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:13:39.730 04:09:38 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:13:39.730 04:09:38 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:13:39.730 04:09:38 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.730 04:09:38 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:13:39.730 04:09:38 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:13:39.730 04:09:38 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:13:39.730 04:09:38 -- ftl/fio.sh@26 -- # uuid= 00:13:39.730 04:09:38 -- ftl/fio.sh@27 -- # timeout=240 00:13:39.730 04:09:38 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:13:39.730 04:09:38 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:13:39.730 04:09:38 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:13:39.730 04:09:38 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:13:39.730 04:09:38 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:13:39.730 04:09:38 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:13:39.730 04:09:38 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:13:39.730 04:09:38 -- ftl/fio.sh@45 -- # svcpid=81988 00:13:39.730 04:09:38 -- ftl/fio.sh@46 -- # waitforlisten 81988 00:13:39.730 04:09:38 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:13:39.730 04:09:38 -- common/autotest_common.sh@829 -- # '[' -z 81988 ']' 00:13:39.730 04:09:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.730 04:09:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.730 04:09:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.730 04:09:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.730 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.730 [2024-11-26 04:09:38.594930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
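The waitforlisten step above does nothing magical: it polls the target's RPC socket until the freshly forked spdk_tgt answers. Roughly, as a sketch (the real helper lives in common/autotest_common.sh and also handles custom socket paths; rpc_get_methods is used here only as a cheap liveness probe):

    # Start the target on cores 0-2 (-m 7) and block until its RPC server is up
    # on /var/tmp/spdk.sock.
    build/bin/spdk_tgt -m 7 &
    svcpid=$!
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" || exit 1   # give up if the target already died
        sleep 0.1
    done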
00:13:39.730 [2024-11-26 04:09:38.595516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81988 ] 00:13:39.730 [2024-11-26 04:09:38.743038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:39.730 [2024-11-26 04:09:38.772413] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.730 [2024-11-26 04:09:38.772921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.730 [2024-11-26 04:09:38.773162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.730 [2024-11-26 04:09:38.773229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.730 04:09:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.730 04:09:39 -- common/autotest_common.sh@862 -- # return 0 00:13:39.730 04:09:39 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:13:39.730 04:09:39 -- ftl/common.sh@54 -- # local name=nvme0 00:13:39.730 04:09:39 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:13:39.730 04:09:39 -- ftl/common.sh@56 -- # local size=103424 00:13:39.730 04:09:39 -- ftl/common.sh@59 -- # local base_bdev 00:13:39.730 04:09:39 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:13:39.730 04:09:39 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:13:39.730 04:09:39 -- ftl/common.sh@62 -- # local base_size 00:13:39.730 04:09:39 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:13:39.730 04:09:39 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:13:39.730 04:09:39 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:39.730 04:09:39 -- common/autotest_common.sh@1369 -- # local bs 00:13:39.730 04:09:39 -- common/autotest_common.sh@1370 -- # local nb 00:13:39.730 04:09:39 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:13:39.730 04:09:39 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:39.730 { 00:13:39.730 "name": "nvme0n1", 00:13:39.730 "aliases": [ 00:13:39.730 "d3c7b93b-fc58-4ba5-9f75-9d23e47f3fb0" 00:13:39.730 ], 00:13:39.730 "product_name": "NVMe disk", 00:13:39.730 "block_size": 4096, 00:13:39.730 "num_blocks": 1310720, 00:13:39.730 "uuid": "d3c7b93b-fc58-4ba5-9f75-9d23e47f3fb0", 00:13:39.730 "assigned_rate_limits": { 00:13:39.730 "rw_ios_per_sec": 0, 00:13:39.730 "rw_mbytes_per_sec": 0, 00:13:39.730 "r_mbytes_per_sec": 0, 00:13:39.730 "w_mbytes_per_sec": 0 00:13:39.730 }, 00:13:39.730 "claimed": false, 00:13:39.730 "zoned": false, 00:13:39.730 "supported_io_types": { 00:13:39.730 "read": true, 00:13:39.730 "write": true, 00:13:39.730 "unmap": true, 00:13:39.730 "write_zeroes": true, 00:13:39.730 "flush": true, 00:13:39.730 "reset": true, 00:13:39.730 "compare": true, 00:13:39.730 "compare_and_write": false, 00:13:39.730 "abort": true, 00:13:39.730 "nvme_admin": true, 00:13:39.730 "nvme_io": true 00:13:39.730 }, 00:13:39.730 "driver_specific": { 00:13:39.730 "nvme": [ 00:13:39.730 { 00:13:39.730 "pci_address": "0000:00:07.0", 00:13:39.730 "trid": { 00:13:39.730 "trtype": "PCIe", 00:13:39.730 "traddr": "0000:00:07.0" 00:13:39.730 }, 00:13:39.730 "ctrlr_data": { 00:13:39.730 "cntlid": 0, 00:13:39.730 "vendor_id": "0x1b36", 00:13:39.730 "model_number": "QEMU NVMe Ctrl", 00:13:39.730 "serial_number": 
"12341", 00:13:39.730 "firmware_revision": "8.0.0", 00:13:39.730 "subnqn": "nqn.2019-08.org.qemu:12341", 00:13:39.730 "oacs": { 00:13:39.730 "security": 0, 00:13:39.730 "format": 1, 00:13:39.730 "firmware": 0, 00:13:39.731 "ns_manage": 1 00:13:39.731 }, 00:13:39.731 "multi_ctrlr": false, 00:13:39.731 "ana_reporting": false 00:13:39.731 }, 00:13:39.731 "vs": { 00:13:39.731 "nvme_version": "1.4" 00:13:39.731 }, 00:13:39.731 "ns_data": { 00:13:39.731 "id": 1, 00:13:39.731 "can_share": false 00:13:39.731 } 00:13:39.731 } 00:13:39.731 ], 00:13:39.731 "mp_policy": "active_passive" 00:13:39.731 } 00:13:39.731 } 00:13:39.731 ]' 00:13:39.731 04:09:39 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:39.731 04:09:39 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:39.731 04:09:39 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:39.731 04:09:39 -- common/autotest_common.sh@1373 -- # nb=1310720 00:13:39.731 04:09:39 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:13:39.731 04:09:39 -- common/autotest_common.sh@1377 -- # echo 5120 00:13:39.731 04:09:39 -- ftl/common.sh@63 -- # base_size=5120 00:13:39.731 04:09:39 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:13:39.731 04:09:39 -- ftl/common.sh@67 -- # clear_lvols 00:13:39.731 04:09:39 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:13:39.731 04:09:39 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:39.731 04:09:40 -- ftl/common.sh@28 -- # stores= 00:13:39.731 04:09:40 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:13:39.731 04:09:40 -- ftl/common.sh@68 -- # lvs=47aaf202-b531-487f-a3ac-c5d398af2ed1 00:13:39.731 04:09:40 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 47aaf202-b531-487f-a3ac-c5d398af2ed1 00:13:39.731 04:09:40 -- ftl/fio.sh@48 -- # split_bdev=8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:40 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:40 -- ftl/common.sh@35 -- # local name=nvc0 00:13:39.731 04:09:40 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:13:39.731 04:09:40 -- ftl/common.sh@37 -- # local base_bdev=8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:40 -- ftl/common.sh@38 -- # local cache_size= 00:13:39.731 04:09:40 -- ftl/common.sh@41 -- # get_bdev_size 8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:40 -- common/autotest_common.sh@1367 -- # local bdev_name=8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:40 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:39.731 04:09:40 -- common/autotest_common.sh@1369 -- # local bs 00:13:39.731 04:09:40 -- common/autotest_common.sh@1370 -- # local nb 00:13:39.731 04:09:40 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:40 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:39.731 { 00:13:39.731 "name": "8cfb9051-bdfd-4846-bc2b-2293aa0c39cc", 00:13:39.731 "aliases": [ 00:13:39.731 "lvs/nvme0n1p0" 00:13:39.731 ], 00:13:39.731 "product_name": "Logical Volume", 00:13:39.731 "block_size": 4096, 00:13:39.731 "num_blocks": 26476544, 00:13:39.731 "uuid": "8cfb9051-bdfd-4846-bc2b-2293aa0c39cc", 00:13:39.731 "assigned_rate_limits": { 00:13:39.731 "rw_ios_per_sec": 0, 00:13:39.731 "rw_mbytes_per_sec": 0, 00:13:39.731 "r_mbytes_per_sec": 0, 00:13:39.731 
"w_mbytes_per_sec": 0 00:13:39.731 }, 00:13:39.731 "claimed": false, 00:13:39.731 "zoned": false, 00:13:39.731 "supported_io_types": { 00:13:39.731 "read": true, 00:13:39.731 "write": true, 00:13:39.731 "unmap": true, 00:13:39.731 "write_zeroes": true, 00:13:39.731 "flush": false, 00:13:39.731 "reset": true, 00:13:39.731 "compare": false, 00:13:39.731 "compare_and_write": false, 00:13:39.731 "abort": false, 00:13:39.731 "nvme_admin": false, 00:13:39.731 "nvme_io": false 00:13:39.731 }, 00:13:39.731 "driver_specific": { 00:13:39.731 "lvol": { 00:13:39.731 "lvol_store_uuid": "47aaf202-b531-487f-a3ac-c5d398af2ed1", 00:13:39.731 "base_bdev": "nvme0n1", 00:13:39.731 "thin_provision": true, 00:13:39.731 "snapshot": false, 00:13:39.731 "clone": false, 00:13:39.731 "esnap_clone": false 00:13:39.731 } 00:13:39.731 } 00:13:39.731 } 00:13:39.731 ]' 00:13:39.731 04:09:40 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:39.731 04:09:40 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:39.731 04:09:40 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:39.731 04:09:40 -- common/autotest_common.sh@1373 -- # nb=26476544 00:13:39.731 04:09:40 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:13:39.731 04:09:40 -- common/autotest_common.sh@1377 -- # echo 103424 00:13:39.731 04:09:40 -- ftl/common.sh@41 -- # local base_size=5171 00:13:39.731 04:09:40 -- ftl/common.sh@44 -- # local nvc_bdev 00:13:39.731 04:09:40 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:13:39.731 04:09:40 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:13:39.731 04:09:40 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:13:39.731 04:09:40 -- ftl/common.sh@48 -- # get_bdev_size 8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:40 -- common/autotest_common.sh@1367 -- # local bdev_name=8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:40 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:39.731 04:09:40 -- common/autotest_common.sh@1369 -- # local bs 00:13:39.731 04:09:40 -- common/autotest_common.sh@1370 -- # local nb 00:13:39.731 04:09:40 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:41 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:39.731 { 00:13:39.731 "name": "8cfb9051-bdfd-4846-bc2b-2293aa0c39cc", 00:13:39.731 "aliases": [ 00:13:39.731 "lvs/nvme0n1p0" 00:13:39.731 ], 00:13:39.731 "product_name": "Logical Volume", 00:13:39.731 "block_size": 4096, 00:13:39.731 "num_blocks": 26476544, 00:13:39.731 "uuid": "8cfb9051-bdfd-4846-bc2b-2293aa0c39cc", 00:13:39.731 "assigned_rate_limits": { 00:13:39.731 "rw_ios_per_sec": 0, 00:13:39.731 "rw_mbytes_per_sec": 0, 00:13:39.731 "r_mbytes_per_sec": 0, 00:13:39.731 "w_mbytes_per_sec": 0 00:13:39.731 }, 00:13:39.731 "claimed": false, 00:13:39.731 "zoned": false, 00:13:39.731 "supported_io_types": { 00:13:39.731 "read": true, 00:13:39.731 "write": true, 00:13:39.731 "unmap": true, 00:13:39.731 "write_zeroes": true, 00:13:39.731 "flush": false, 00:13:39.731 "reset": true, 00:13:39.731 "compare": false, 00:13:39.731 "compare_and_write": false, 00:13:39.731 "abort": false, 00:13:39.731 "nvme_admin": false, 00:13:39.731 "nvme_io": false 00:13:39.731 }, 00:13:39.731 "driver_specific": { 00:13:39.731 "lvol": { 00:13:39.731 "lvol_store_uuid": "47aaf202-b531-487f-a3ac-c5d398af2ed1", 00:13:39.731 "base_bdev": "nvme0n1", 00:13:39.731 "thin_provision": true, 
00:13:39.731 "snapshot": false, 00:13:39.731 "clone": false, 00:13:39.731 "esnap_clone": false 00:13:39.731 } 00:13:39.731 } 00:13:39.731 } 00:13:39.731 ]' 00:13:39.731 04:09:41 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:39.731 04:09:41 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:39.731 04:09:41 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:39.731 04:09:41 -- common/autotest_common.sh@1373 -- # nb=26476544 00:13:39.731 04:09:41 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:13:39.731 04:09:41 -- common/autotest_common.sh@1377 -- # echo 103424 00:13:39.731 04:09:41 -- ftl/common.sh@48 -- # cache_size=5171 00:13:39.731 04:09:41 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:13:39.731 04:09:41 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:13:39.731 04:09:41 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:13:39.731 04:09:41 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:13:39.731 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:13:39.731 04:09:41 -- ftl/fio.sh@56 -- # get_bdev_size 8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:41 -- common/autotest_common.sh@1367 -- # local bdev_name=8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.731 04:09:41 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:39.731 04:09:41 -- common/autotest_common.sh@1369 -- # local bs 00:13:39.731 04:09:41 -- common/autotest_common.sh@1370 -- # local nb 00:13:39.731 04:09:41 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8cfb9051-bdfd-4846-bc2b-2293aa0c39cc 00:13:39.990 04:09:41 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:39.990 { 00:13:39.990 "name": "8cfb9051-bdfd-4846-bc2b-2293aa0c39cc", 00:13:39.990 "aliases": [ 00:13:39.990 "lvs/nvme0n1p0" 00:13:39.990 ], 00:13:39.990 "product_name": "Logical Volume", 00:13:39.990 "block_size": 4096, 00:13:39.990 "num_blocks": 26476544, 00:13:39.990 "uuid": "8cfb9051-bdfd-4846-bc2b-2293aa0c39cc", 00:13:39.990 "assigned_rate_limits": { 00:13:39.990 "rw_ios_per_sec": 0, 00:13:39.990 "rw_mbytes_per_sec": 0, 00:13:39.990 "r_mbytes_per_sec": 0, 00:13:39.990 "w_mbytes_per_sec": 0 00:13:39.990 }, 00:13:39.990 "claimed": false, 00:13:39.990 "zoned": false, 00:13:39.990 "supported_io_types": { 00:13:39.990 "read": true, 00:13:39.990 "write": true, 00:13:39.990 "unmap": true, 00:13:39.990 "write_zeroes": true, 00:13:39.990 "flush": false, 00:13:39.990 "reset": true, 00:13:39.990 "compare": false, 00:13:39.990 "compare_and_write": false, 00:13:39.990 "abort": false, 00:13:39.990 "nvme_admin": false, 00:13:39.990 "nvme_io": false 00:13:39.990 }, 00:13:39.990 "driver_specific": { 00:13:39.990 "lvol": { 00:13:39.990 "lvol_store_uuid": "47aaf202-b531-487f-a3ac-c5d398af2ed1", 00:13:39.990 "base_bdev": "nvme0n1", 00:13:39.990 "thin_provision": true, 00:13:39.990 "snapshot": false, 00:13:39.990 "clone": false, 00:13:39.990 "esnap_clone": false 00:13:39.990 } 00:13:39.990 } 00:13:39.990 } 00:13:39.990 ]' 00:13:39.990 04:09:41 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:39.990 04:09:41 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:39.990 04:09:41 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:39.990 04:09:41 -- common/autotest_common.sh@1373 -- # nb=26476544 00:13:39.990 04:09:41 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:13:39.990 04:09:41 -- common/autotest_common.sh@1377 -- # echo 103424 00:13:39.990 
04:09:41 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:13:39.990 04:09:41 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:13:39.990 04:09:41 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8cfb9051-bdfd-4846-bc2b-2293aa0c39cc -c nvc0n1p0 --l2p_dram_limit 60 00:13:40.250 [2024-11-26 04:09:41.770795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.770848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:13:40.250 [2024-11-26 04:09:41.770861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:13:40.250 [2024-11-26 04:09:41.770877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.770947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.770955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:13:40.250 [2024-11-26 04:09:41.770972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:13:40.250 [2024-11-26 04:09:41.770979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.771014] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:13:40.250 [2024-11-26 04:09:41.771882] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:13:40.250 [2024-11-26 04:09:41.771903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.771920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:13:40.250 [2024-11-26 04:09:41.771928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:13:40.250 [2024-11-26 04:09:41.771934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.771975] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7879509b-3b28-451f-923c-cfd952716e82 00:13:40.250 [2024-11-26 04:09:41.773090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.773121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:13:40.250 [2024-11-26 04:09:41.773129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:13:40.250 [2024-11-26 04:09:41.773139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.778423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.778450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:13:40.250 [2024-11-26 04:09:41.778458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.190 ms 00:13:40.250 [2024-11-26 04:09:41.778468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.778576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.778587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:13:40.250 [2024-11-26 04:09:41.778593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:13:40.250 [2024-11-26 04:09:41.778609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.778670] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.778680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:13:40.250 [2024-11-26 04:09:41.778686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:13:40.250 [2024-11-26 04:09:41.778692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.778725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:13:40.250 [2024-11-26 04:09:41.780048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.780072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:13:40.250 [2024-11-26 04:09:41.780080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:13:40.250 [2024-11-26 04:09:41.780095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.780133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.780139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:13:40.250 [2024-11-26 04:09:41.780148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:13:40.250 [2024-11-26 04:09:41.780154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.780176] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:13:40.250 [2024-11-26 04:09:41.780286] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:13:40.250 [2024-11-26 04:09:41.780302] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:13:40.250 [2024-11-26 04:09:41.780309] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:13:40.250 [2024-11-26 04:09:41.780327] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:13:40.250 [2024-11-26 04:09:41.780334] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:13:40.250 [2024-11-26 04:09:41.780342] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:13:40.250 [2024-11-26 04:09:41.780347] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:13:40.250 [2024-11-26 04:09:41.780353] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:13:40.250 [2024-11-26 04:09:41.780366] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:13:40.250 [2024-11-26 04:09:41.780373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.780378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:13:40.250 [2024-11-26 04:09:41.780385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:13:40.250 [2024-11-26 04:09:41.780402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.780469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.250 [2024-11-26 04:09:41.780486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:13:40.250 [2024-11-26 04:09:41.780493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.035 ms 00:13:40.250 [2024-11-26 04:09:41.780498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.250 [2024-11-26 04:09:41.780588] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:13:40.250 [2024-11-26 04:09:41.780595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:13:40.250 [2024-11-26 04:09:41.780602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:13:40.250 [2024-11-26 04:09:41.780608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:13:40.250 [2024-11-26 04:09:41.780617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:13:40.251 [2024-11-26 04:09:41.780622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:13:40.251 [2024-11-26 04:09:41.780633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:13:40.251 [2024-11-26 04:09:41.780640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:13:40.251 [2024-11-26 04:09:41.780651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:13:40.251 [2024-11-26 04:09:41.780656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:13:40.251 [2024-11-26 04:09:41.780663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:13:40.251 [2024-11-26 04:09:41.780668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:13:40.251 [2024-11-26 04:09:41.780675] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:13:40.251 [2024-11-26 04:09:41.780680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:13:40.251 [2024-11-26 04:09:41.780690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:13:40.251 [2024-11-26 04:09:41.780698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:13:40.251 [2024-11-26 04:09:41.780712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:13:40.251 [2024-11-26 04:09:41.780729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:13:40.251 [2024-11-26 04:09:41.780736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:13:40.251 [2024-11-26 04:09:41.780742] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:13:40.251 [2024-11-26 04:09:41.780755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:13:40.251 [2024-11-26 04:09:41.780765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:13:40.251 [2024-11-26 04:09:41.780780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:13:40.251 [2024-11-26 04:09:41.780788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780794] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:13:40.251 [2024-11-26 04:09:41.780801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:13:40.251 [2024-11-26 04:09:41.780808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:13:40.251 [2024-11-26 04:09:41.780821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:13:40.251 [2024-11-26 04:09:41.780827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:13:40.251 [2024-11-26 04:09:41.780840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:13:40.251 [2024-11-26 04:09:41.780847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:13:40.251 [2024-11-26 04:09:41.780852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:13:40.251 [2024-11-26 04:09:41.780859] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:13:40.251 [2024-11-26 04:09:41.780873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:13:40.251 [2024-11-26 04:09:41.780880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:13:40.251 [2024-11-26 04:09:41.780886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:13:40.251 [2024-11-26 04:09:41.780896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:13:40.251 [2024-11-26 04:09:41.780902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:13:40.251 [2024-11-26 04:09:41.780910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:13:40.251 [2024-11-26 04:09:41.780915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:13:40.251 [2024-11-26 04:09:41.780923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:13:40.251 [2024-11-26 04:09:41.780928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:13:40.251 [2024-11-26 04:09:41.780936] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:13:40.251 [2024-11-26 04:09:41.780944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:13:40.251 [2024-11-26 04:09:41.780953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:13:40.251 [2024-11-26 04:09:41.780959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:13:40.251 [2024-11-26 04:09:41.780967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:13:40.251 [2024-11-26 04:09:41.780973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:13:40.251 [2024-11-26 04:09:41.780981] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:13:40.251 [2024-11-26 04:09:41.780987] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:13:40.251 
[2024-11-26 04:09:41.780996] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:13:40.251 [2024-11-26 04:09:41.781002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:13:40.251 [2024-11-26 04:09:41.781011] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:13:40.251 [2024-11-26 04:09:41.781017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:13:40.251 [2024-11-26 04:09:41.781024] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:13:40.251 [2024-11-26 04:09:41.781031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:13:40.251 [2024-11-26 04:09:41.781039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:13:40.251 [2024-11-26 04:09:41.781046] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:13:40.251 [2024-11-26 04:09:41.781054] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:13:40.251 [2024-11-26 04:09:41.781061] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:13:40.251 [2024-11-26 04:09:41.781068] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:13:40.251 [2024-11-26 04:09:41.781075] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:13:40.251 [2024-11-26 04:09:41.781083] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:13:40.251 [2024-11-26 04:09:41.781089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.251 [2024-11-26 04:09:41.781097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:13:40.251 [2024-11-26 04:09:41.781111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:13:40.251 [2024-11-26 04:09:41.781118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.251 [2024-11-26 04:09:41.786812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.251 [2024-11-26 04:09:41.786843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:13:40.251 [2024-11-26 04:09:41.786851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.621 ms 00:13:40.251 [2024-11-26 04:09:41.786858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.251 [2024-11-26 04:09:41.786933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.251 [2024-11-26 04:09:41.786941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:13:40.251 [2024-11-26 04:09:41.786947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:13:40.251 [2024-11-26 04:09:41.786955] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.251 [2024-11-26 04:09:41.795433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.251 [2024-11-26 04:09:41.795460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:13:40.251 [2024-11-26 04:09:41.795468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.437 ms 00:13:40.251 [2024-11-26 04:09:41.795475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.251 [2024-11-26 04:09:41.795523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.251 [2024-11-26 04:09:41.795532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:13:40.251 [2024-11-26 04:09:41.795540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:13:40.251 [2024-11-26 04:09:41.795547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.251 [2024-11-26 04:09:41.795865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.251 [2024-11-26 04:09:41.795900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:13:40.251 [2024-11-26 04:09:41.795907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:13:40.251 [2024-11-26 04:09:41.795914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.251 [2024-11-26 04:09:41.796010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.251 [2024-11-26 04:09:41.796018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:13:40.251 [2024-11-26 04:09:41.796024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:13:40.251 [2024-11-26 04:09:41.796035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.251 [2024-11-26 04:09:41.812460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.251 [2024-11-26 04:09:41.812709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:13:40.251 [2024-11-26 04:09:41.812740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.395 ms 00:13:40.251 [2024-11-26 04:09:41.812758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.251 [2024-11-26 04:09:41.824059] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:13:40.251 [2024-11-26 04:09:41.836906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.252 [2024-11-26 04:09:41.836951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:13:40.252 [2024-11-26 04:09:41.836963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.983 ms 00:13:40.252 [2024-11-26 04:09:41.836970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.252 [2024-11-26 04:09:41.880254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:40.252 [2024-11-26 04:09:41.880326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:13:40.252 [2024-11-26 04:09:41.880348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.226 ms 00:13:40.252 [2024-11-26 04:09:41.880359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:40.252 [2024-11-26 04:09:41.880456] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
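Every FTL management step in this startup is traced by mngt/ftl_mngt.c as a four-line group: an Action marker (source line 406), the step name (407), its duration (409), and a status (410). To see where startup time goes, the name/duration pairs can be pulled out and ranked. A minimal awk sketch, assuming one log entry per line as the console is actually stored (ftl.log is a placeholder for wherever this output was saved):

  # One group per step: 406 = Action, 407 = name, 409 = duration, 410 = status.
  # Remember the name from each 407 line, emit it with the duration from the
  # matching 409 line, then rank the steps by time spent.
  awk '/407:trace_step/ { sub(/.*name: /, ""); name = $0 }
       /409:trace_step/ { sub(/.*duration: /, ""); printf "%10.3f ms  %s\n", $1, name }' \
    ftl.log | sort -rn | head

Against this startup it puts Clear L2P (43.226 ms, above) and the NV cache scrub that follows at the top of the list.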
00:13:40.252 [2024-11-26 04:09:41.880473] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:13:42.783 [2024-11-26 04:09:44.520273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:42.783 [2024-11-26 04:09:44.520336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:13:42.783 [2024-11-26 04:09:44.520353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2639.797 ms 00:13:42.783 [2024-11-26 04:09:44.520363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:42.783 [2024-11-26 04:09:44.520615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:42.783 [2024-11-26 04:09:44.520627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:13:42.783 [2024-11-26 04:09:44.520638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:13:42.783 [2024-11-26 04:09:44.520646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:42.783 [2024-11-26 04:09:44.523726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:42.783 [2024-11-26 04:09:44.523764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:13:42.783 [2024-11-26 04:09:44.523798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:13:42.783 [2024-11-26 04:09:44.523811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.526151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.042 [2024-11-26 04:09:44.526183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:13:43.042 [2024-11-26 04:09:44.526195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:13:43.042 [2024-11-26 04:09:44.526204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.526385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.042 [2024-11-26 04:09:44.526395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:13:43.042 [2024-11-26 04:09:44.526405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:13:43.042 [2024-11-26 04:09:44.526413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.546987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.042 [2024-11-26 04:09:44.547151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:13:43.042 [2024-11-26 04:09:44.547172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.523 ms 00:13:43.042 [2024-11-26 04:09:44.547180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.550900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.042 [2024-11-26 04:09:44.550935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:13:43.042 [2024-11-26 04:09:44.550950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:13:43.042 [2024-11-26 04:09:44.550957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.555207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.042 [2024-11-26 04:09:44.555239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:13:43.042 
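The first-startup scrub above zeroes the whole 4 GiB NV cache data region and accounts for nearly all of the 'FTL startup' total reported just below (2639.797 ms of 2788.460 ms). A quick bc check of the implied scrub bandwidth from those numbers:

  # 4096 MiB scrubbed in 2639.797 ms: MiB/ms -> MiB/s -> GiB/s
  echo 'scale=3; 4096 / 2639.797 * 1000 / 1024' | bc    # ~1.51 GiB/s

That is a useful baseline to keep in mind if first-startup times in this test ever regress.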
[2024-11-26 04:09:44.555252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.198 ms 00:13:43.042 [2024-11-26 04:09:44.555260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.558381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.042 [2024-11-26 04:09:44.558522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:13:43.042 [2024-11-26 04:09:44.558541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:13:43.042 [2024-11-26 04:09:44.558548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.558603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.042 [2024-11-26 04:09:44.558612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:13:43.042 [2024-11-26 04:09:44.558621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:13:43.042 [2024-11-26 04:09:44.558628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.558718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.042 [2024-11-26 04:09:44.558727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:13:43.042 [2024-11-26 04:09:44.558738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:13:43.042 [2024-11-26 04:09:44.558746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.042 [2024-11-26 04:09:44.559679] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2788.460 ms, result 0 00:13:43.042 { 00:13:43.042 "name": "ftl0", 00:13:43.042 "uuid": "7879509b-3b28-451f-923c-cfd952716e82" 00:13:43.042 } 00:13:43.042 04:09:44 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:13:43.042 04:09:44 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:13:43.042 04:09:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:43.042 04:09:44 -- common/autotest_common.sh@899 -- # local i 00:13:43.042 04:09:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:43.042 04:09:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:43.042 04:09:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:43.042 04:09:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:13:43.301 [ 00:13:43.301 { 00:13:43.301 "name": "ftl0", 00:13:43.301 "aliases": [ 00:13:43.301 "7879509b-3b28-451f-923c-cfd952716e82" 00:13:43.301 ], 00:13:43.301 "product_name": "FTL disk", 00:13:43.301 "block_size": 4096, 00:13:43.301 "num_blocks": 20971520, 00:13:43.301 "uuid": "7879509b-3b28-451f-923c-cfd952716e82", 00:13:43.301 "assigned_rate_limits": { 00:13:43.301 "rw_ios_per_sec": 0, 00:13:43.301 "rw_mbytes_per_sec": 0, 00:13:43.301 "r_mbytes_per_sec": 0, 00:13:43.301 "w_mbytes_per_sec": 0 00:13:43.301 }, 00:13:43.301 "claimed": false, 00:13:43.301 "zoned": false, 00:13:43.301 "supported_io_types": { 00:13:43.301 "read": true, 00:13:43.301 "write": true, 00:13:43.301 "unmap": true, 00:13:43.301 "write_zeroes": true, 00:13:43.301 "flush": true, 00:13:43.301 "reset": false, 00:13:43.301 "compare": false, 00:13:43.301 "compare_and_write": false, 00:13:43.301 "abort": false, 00:13:43.301 "nvme_admin": false, 00:13:43.301 "nvme_io": false 00:13:43.301 }, 00:13:43.301 
"driver_specific": { 00:13:43.301 "ftl": { 00:13:43.301 "base_bdev": "8cfb9051-bdfd-4846-bc2b-2293aa0c39cc", 00:13:43.301 "cache": "nvc0n1p0" 00:13:43.301 } 00:13:43.301 } 00:13:43.301 } 00:13:43.301 ] 00:13:43.301 04:09:44 -- common/autotest_common.sh@905 -- # return 0 00:13:43.301 04:09:44 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:13:43.301 04:09:44 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:13:43.301 04:09:45 -- ftl/fio.sh@70 -- # echo ']}' 00:13:43.301 04:09:45 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:13:43.562 [2024-11-26 04:09:45.169818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.170052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:13:43.562 [2024-11-26 04:09:45.170072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:13:43.562 [2024-11-26 04:09:45.170085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.170127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:13:43.562 [2024-11-26 04:09:45.170621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.170637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:13:43.562 [2024-11-26 04:09:45.170648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:13:43.562 [2024-11-26 04:09:45.170655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.171257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.171274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:13:43.562 [2024-11-26 04:09:45.171286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:13:43.562 [2024-11-26 04:09:45.171294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.174585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.174608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:13:43.562 [2024-11-26 04:09:45.174619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.261 ms 00:13:43.562 [2024-11-26 04:09:45.174627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.180866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.180992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:13:43.562 [2024-11-26 04:09:45.181011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:13:43.562 [2024-11-26 04:09:45.181021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.182627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.182658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:13:43.562 [2024-11-26 04:09:45.182682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:13:43.562 [2024-11-26 04:09:45.182689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.186271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.186304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:13:43.562 [2024-11-26 04:09:45.186316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.536 ms 00:13:43.562 [2024-11-26 04:09:45.186324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.186539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.186549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:13:43.562 [2024-11-26 04:09:45.186562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:13:43.562 [2024-11-26 04:09:45.186568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.187944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.188052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:13:43.562 [2024-11-26 04:09:45.188068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:13:43.562 [2024-11-26 04:09:45.188075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.189109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.189134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:13:43.562 [2024-11-26 04:09:45.189144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:13:43.562 [2024-11-26 04:09:45.189151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.190017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.190046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:13:43.562 [2024-11-26 04:09:45.190056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:13:43.562 [2024-11-26 04:09:45.190063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.190971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.562 [2024-11-26 04:09:45.190999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:13:43.562 [2024-11-26 04:09:45.191010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:13:43.562 [2024-11-26 04:09:45.191017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.562 [2024-11-26 04:09:45.191055] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:13:43.562 [2024-11-26 04:09:45.191070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191347] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:13:43.562 [2024-11-26 04:09:45.191444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.191995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 
04:09:45.192046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:13:43.563 [2024-11-26 04:09:45.192512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:13:43.563 [2024-11-26 04:09:45.192701] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:13:43.563 [2024-11-26 04:09:45.192712] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7879509b-3b28-451f-923c-cfd952716e82 00:13:43.563 [2024-11-26 04:09:45.192720] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:13:43.563 [2024-11-26 04:09:45.192728] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:13:43.563 [2024-11-26 04:09:45.192735] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:13:43.563 [2024-11-26 04:09:45.192745] ftl_debug.c: 216:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] WAF: inf 00:13:43.563 [2024-11-26 04:09:45.192754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:13:43.563 [2024-11-26 04:09:45.192764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:13:43.563 [2024-11-26 04:09:45.192771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:13:43.563 [2024-11-26 04:09:45.192779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:13:43.563 [2024-11-26 04:09:45.192785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:13:43.563 [2024-11-26 04:09:45.192794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.563 [2024-11-26 04:09:45.192801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:13:43.563 [2024-11-26 04:09:45.192810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.739 ms 00:13:43.563 [2024-11-26 04:09:45.192818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.563 [2024-11-26 04:09:45.194322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.563 [2024-11-26 04:09:45.194338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:13:43.563 [2024-11-26 04:09:45.194350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:13:43.563 [2024-11-26 04:09:45.194357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.563 [2024-11-26 04:09:45.194437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:43.563 [2024-11-26 04:09:45.194445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:13:43.563 [2024-11-26 04:09:45.194454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:13:43.563 [2024-11-26 04:09:45.194461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.563 [2024-11-26 04:09:45.199921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.563 [2024-11-26 04:09:45.199957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:13:43.563 [2024-11-26 04:09:45.199971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.563 [2024-11-26 04:09:45.199978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.563 [2024-11-26 04:09:45.200059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.563 [2024-11-26 04:09:45.200067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:13:43.563 [2024-11-26 04:09:45.200076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.563 [2024-11-26 04:09:45.200082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.563 [2024-11-26 04:09:45.200158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.563 [2024-11-26 04:09:45.200167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:13:43.563 [2024-11-26 04:09:45.200177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.563 [2024-11-26 04:09:45.200186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.563 [2024-11-26 04:09:45.200225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.563 [2024-11-26 04:09:45.200232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:13:43.563 [2024-11-26 
04:09:45.200241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.200248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.210010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.564 [2024-11-26 04:09:45.210053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:13:43.564 [2024-11-26 04:09:45.210067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.210074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.213796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.564 [2024-11-26 04:09:45.213829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:13:43.564 [2024-11-26 04:09:45.213841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.213848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.213931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.564 [2024-11-26 04:09:45.213940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:13:43.564 [2024-11-26 04:09:45.213969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.213976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.214056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.564 [2024-11-26 04:09:45.214064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:13:43.564 [2024-11-26 04:09:45.214072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.214079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.214165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.564 [2024-11-26 04:09:45.214173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:13:43.564 [2024-11-26 04:09:45.214183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.214190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.214240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.564 [2024-11-26 04:09:45.214259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:13:43.564 [2024-11-26 04:09:45.214269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.214275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.214333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.564 [2024-11-26 04:09:45.214342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:13:43.564 [2024-11-26 04:09:45.214351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.214358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.214434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:43.564 [2024-11-26 04:09:45.214453] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev 00:13:43.564 [2024-11-26 04:09:45.214463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:43.564 [2024-11-26 04:09:45.214471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:43.564 [2024-11-26 04:09:45.214688] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.829 ms, result 0 00:13:43.564 true 00:13:43.564 04:09:45 -- ftl/fio.sh@75 -- # killprocess 81988 00:13:43.564 04:09:45 -- common/autotest_common.sh@936 -- # '[' -z 81988 ']' 00:13:43.564 04:09:45 -- common/autotest_common.sh@940 -- # kill -0 81988 00:13:43.564 04:09:45 -- common/autotest_common.sh@941 -- # uname 00:13:43.564 04:09:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:43.564 04:09:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81988 00:13:43.564 killing process with pid 81988 00:13:43.564 04:09:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:43.564 04:09:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:43.564 04:09:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81988' 00:13:43.564 04:09:45 -- common/autotest_common.sh@955 -- # kill 81988 00:13:43.564 04:09:45 -- common/autotest_common.sh@960 -- # wait 81988 00:13:46.849 04:09:48 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:13:46.849 04:09:48 -- ftl/fio.sh@78 -- # for test in ${tests} 00:13:46.849 04:09:48 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:13:46.849 04:09:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.849 04:09:48 -- common/autotest_common.sh@10 -- # set +x 00:13:46.849 04:09:48 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:13:46.849 04:09:48 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:13:46.849 04:09:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:46.849 04:09:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:46.849 04:09:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:46.849 04:09:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:46.849 04:09:48 -- common/autotest_common.sh@1330 -- # shift 00:13:46.849 04:09:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:46.849 04:09:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:46.849 04:09:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:46.849 04:09:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:46.849 04:09:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:46.849 04:09:48 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:46.849 04:09:48 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:46.849 04:09:48 -- common/autotest_common.sh@1336 -- # break 00:13:46.849 04:09:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:46.849 04:09:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:13:47.108 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 
68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:13:47.108 fio-3.35 00:13:47.108 Starting 1 thread 00:13:50.392 00:13:50.392 test: (groupid=0, jobs=1): err= 0: pid=82158: Tue Nov 26 04:09:52 2024 00:13:50.392 read: IOPS=1367, BW=90.8MiB/s (95.3MB/s)(255MiB/2802msec) 00:13:50.392 slat (nsec): min=2954, max=18965, avg=4368.78, stdev=1750.72 00:13:50.392 clat (usec): min=231, max=868, avg=327.24, stdev=63.81 00:13:50.392 lat (usec): min=235, max=883, avg=331.60, stdev=64.31 00:13:50.392 clat percentiles (usec): 00:13:50.392 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 306], 00:13:50.392 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 310], 60.00th=[ 314], 00:13:50.392 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 404], 95.00th=[ 449], 00:13:50.392 | 99.00th=[ 627], 99.50th=[ 709], 99.90th=[ 840], 99.95th=[ 865], 00:13:50.392 | 99.99th=[ 873] 00:13:50.392 write: IOPS=1378, BW=91.5MiB/s (96.0MB/s)(256MiB/2798msec); 0 zone resets 00:13:50.392 slat (nsec): min=13446, max=94583, avg=19278.91, stdev=3806.02 00:13:50.392 clat (usec): min=252, max=1010, avg=363.89, stdev=91.59 00:13:50.392 lat (usec): min=271, max=1030, avg=383.17, stdev=92.15 00:13:50.392 clat percentiles (usec): 00:13:50.392 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 330], 00:13:50.392 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 334], 60.00th=[ 338], 00:13:50.392 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 420], 95.00th=[ 578], 00:13:50.392 | 99.00th=[ 783], 99.50th=[ 865], 99.90th=[ 938], 99.95th=[ 979], 00:13:50.392 | 99.99th=[ 1012] 00:13:50.392 bw ( KiB/s): min=89760, max=96424, per=100.00%, avg=94030.40, stdev=2643.44, samples=5 00:13:50.392 iops : min= 1320, max= 1418, avg=1382.80, stdev=38.87, samples=5 00:13:50.392 lat (usec) : 250=0.13%, 500=95.02%, 750=4.02%, 1000=0.82% 00:13:50.392 lat (msec) : 2=0.01% 00:13:50.392 cpu : usr=99.39%, sys=0.11%, ctx=8, majf=0, minf=1329 00:13:50.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.392 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.392 00:13:50.392 Run status group 0 (all jobs): 00:13:50.392 READ: bw=90.8MiB/s (95.3MB/s), 90.8MiB/s-90.8MiB/s (95.3MB/s-95.3MB/s), io=255MiB (267MB), run=2802-2802msec 00:13:50.392 WRITE: bw=91.5MiB/s (96.0MB/s), 91.5MiB/s-91.5MiB/s (96.0MB/s-96.0MB/s), io=256MiB (269MB), run=2798-2798msec 00:13:50.958 ----------------------------------------------------- 00:13:50.958 Suppressions used: 00:13:50.958 count bytes template 00:13:50.958 1 5 /usr/src/fio/parse.c 00:13:50.958 1 8 libtcmalloc_minimal.so 00:13:50.958 1 904 libcrypto.so 00:13:50.958 ----------------------------------------------------- 00:13:50.958 00:13:50.958 04:09:52 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:13:50.958 04:09:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.958 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.216 04:09:52 -- ftl/fio.sh@78 -- # for test in ${tests} 00:13:51.216 04:09:52 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:13:51.216 04:09:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.216 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:51.216 04:09:52 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:13:51.216 
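The fio_bdev wrapper traced here (once for randw-verify above, and again for randw-verify-j2) comes down to one trick: with an ASan build, the sanitizer runtime must be the first DSO loaded, so the wrapper resolves it from the plugin's own dependencies with ldd and preloads it ahead of the spdk_bdev engine. A condensed sketch of the commands visible in the trace:

  # From the trace: find the libasan the fio plugin links against, then preload
  # both the sanitizer runtime and the plugin before handing fio the job file.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 on this runner
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$1"          # $1 = the .fio job file

The real helper also probes libclang_rt.asan and breaks out of the loop on the first hit, as the sanitizers array in the trace shows; the sketch keeps only the path taken in this run.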
04:09:52 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:13:51.216 04:09:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:51.216 04:09:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:51.216 04:09:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:51.216 04:09:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.216 04:09:52 -- common/autotest_common.sh@1330 -- # shift 00:13:51.216 04:09:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:51.216 04:09:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.216 04:09:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.216 04:09:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:51.216 04:09:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:51.216 04:09:52 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.216 04:09:52 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.216 04:09:52 -- common/autotest_common.sh@1336 -- # break 00:13:51.216 04:09:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.216 04:09:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:13:51.216 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:13:51.217 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:13:51.217 fio-3.35 00:13:51.217 Starting 2 threads 00:14:13.157 00:14:13.157 first_half: (groupid=0, jobs=1): err= 0: pid=82234: Tue Nov 26 04:10:14 2024 00:14:13.157 read: IOPS=3118, BW=12.2MiB/s (12.8MB/s)(256MiB/20992msec) 00:14:13.157 slat (nsec): min=2895, max=51375, avg=5356.53, stdev=1019.68 00:14:13.157 clat (usec): min=499, max=246777, avg=34667.69, stdev=20509.43 00:14:13.157 lat (usec): min=503, max=246782, avg=34673.04, stdev=20509.58 00:14:13.157 clat percentiles (msec): 00:14:13.157 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 29], 00:14:13.157 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 30], 00:14:13.157 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 66], 00:14:13.157 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 184], 99.95th=[ 215], 00:14:13.157 | 99.99th=[ 241] 00:14:13.157 write: IOPS=3126, BW=12.2MiB/s (12.8MB/s)(256MiB/20960msec); 0 zone resets 00:14:13.157 slat (usec): min=3, max=246, avg= 6.28, stdev= 2.99 00:14:13.157 clat (usec): min=211, max=45433, avg=6337.76, stdev=6854.66 00:14:13.157 lat (usec): min=232, max=45438, avg=6344.04, stdev=6854.73 00:14:13.157 clat percentiles (usec): 00:14:13.157 | 1.00th=[ 701], 5.00th=[ 824], 10.00th=[ 1074], 20.00th=[ 2343], 00:14:13.157 | 30.00th=[ 3097], 40.00th=[ 3818], 50.00th=[ 4555], 60.00th=[ 5145], 00:14:13.157 | 70.00th=[ 5669], 80.00th=[ 7046], 90.00th=[12387], 95.00th=[25822], 00:14:13.157 | 99.00th=[32900], 99.50th=[33817], 99.90th=[37487], 99.95th=[39060], 00:14:13.157 | 99.99th=[45351] 00:14:13.158 bw ( KiB/s): min= 4744, max=55760, per=94.64%, avg=23673.82, stdev=14740.08, samples=22 00:14:13.158 iops : min= 1186, max=13940, 
avg=5918.45, stdev=3685.02, samples=22 00:14:13.158 lat (usec) : 250=0.01%, 500=0.02%, 750=1.09%, 1000=3.43% 00:14:13.158 lat (msec) : 2=4.27%, 4=12.30%, 10=22.52%, 20=4.55%, 50=48.71% 00:14:13.158 lat (msec) : 100=1.50%, 250=1.62% 00:14:13.158 cpu : usr=99.44%, sys=0.12%, ctx=46, majf=0, minf=5557 00:14:13.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:13.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.158 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:13.158 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:13.158 second_half: (groupid=0, jobs=1): err= 0: pid=82235: Tue Nov 26 04:10:14 2024 00:14:13.158 read: IOPS=3145, BW=12.3MiB/s (12.9MB/s)(256MiB/20820msec) 00:14:13.158 slat (nsec): min=2996, max=20339, avg=4264.78, stdev=826.55 00:14:13.158 clat (msec): min=9, max=168, avg=35.10, stdev=18.39 00:14:13.158 lat (msec): min=9, max=168, avg=35.11, stdev=18.39 00:14:13.158 clat percentiles (msec): 00:14:13.158 | 1.00th=[ 25], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:14:13.158 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 31], 00:14:13.158 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 41], 95.00th=[ 63], 00:14:13.158 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 163], 00:14:13.158 | 99.99th=[ 165] 00:14:13.158 write: IOPS=3164, BW=12.4MiB/s (13.0MB/s)(256MiB/20707msec); 0 zone resets 00:14:13.158 slat (usec): min=3, max=439, avg= 5.39, stdev= 3.14 00:14:13.158 clat (usec): min=334, max=37827, avg=5569.61, stdev=4718.29 00:14:13.158 lat (usec): min=342, max=37839, avg=5575.00, stdev=4718.75 00:14:13.158 clat percentiles (usec): 00:14:13.158 | 1.00th=[ 750], 5.00th=[ 1401], 10.00th=[ 2180], 20.00th=[ 2868], 00:14:13.158 | 30.00th=[ 3425], 40.00th=[ 3982], 50.00th=[ 4555], 60.00th=[ 5080], 00:14:13.158 | 70.00th=[ 5342], 80.00th=[ 6128], 90.00th=[10421], 95.00th=[12256], 00:14:13.158 | 99.00th=[30278], 99.50th=[33424], 99.90th=[34341], 99.95th=[34866], 00:14:13.158 | 99.99th=[37487] 00:14:13.158 bw ( KiB/s): min= 3856, max=47616, per=100.00%, avg=27403.79, stdev=13971.24, samples=19 00:14:13.158 iops : min= 964, max=11904, avg=6850.95, stdev=3492.81, samples=19 00:14:13.158 lat (usec) : 500=0.04%, 750=0.45%, 1000=0.86% 00:14:13.158 lat (msec) : 2=2.85%, 4=16.03%, 10=24.01%, 20=4.66%, 50=47.93% 00:14:13.158 lat (msec) : 100=1.77%, 250=1.39% 00:14:13.158 cpu : usr=99.45%, sys=0.16%, ctx=32, majf=0, minf=5587 00:14:13.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:13.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.158 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:13.158 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:13.158 00:14:13.158 Run status group 0 (all jobs): 00:14:13.158 READ: bw=24.4MiB/s (25.6MB/s), 12.2MiB/s-12.3MiB/s (12.8MB/s-12.9MB/s), io=512MiB (536MB), run=20820-20992msec 00:14:13.158 WRITE: bw=24.4MiB/s (25.6MB/s), 12.2MiB/s-12.4MiB/s (12.8MB/s-13.0MB/s), io=512MiB (537MB), run=20707-20960msec 00:14:14.093 ----------------------------------------------------- 00:14:14.093 Suppressions used: 00:14:14.093 count bytes template 00:14:14.093 2 10 /usr/src/fio/parse.c 00:14:14.093 3 288 /usr/src/fio/iolog.c 00:14:14.093 1 8 libtcmalloc_minimal.so 00:14:14.093 1 
904 libcrypto.so 00:14:14.093 ----------------------------------------------------- 00:14:14.093 00:14:14.093 04:10:15 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:14:14.093 04:10:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.093 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:14:14.093 04:10:15 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:14.093 04:10:15 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:14:14.093 04:10:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.093 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:14:14.093 04:10:15 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:14.093 04:10:15 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:14.093 04:10:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:14.093 04:10:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:14.093 04:10:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:14.093 04:10:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:14.093 04:10:15 -- common/autotest_common.sh@1330 -- # shift 00:14:14.093 04:10:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:14.093 04:10:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:14.093 04:10:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:14.093 04:10:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:14.093 04:10:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:14.093 04:10:15 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:14.093 04:10:15 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:14.093 04:10:15 -- common/autotest_common.sh@1336 -- # break 00:14:14.093 04:10:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:14.093 04:10:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:14.093 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:14.093 fio-3.35 00:14:14.093 Starting 1 thread 00:14:26.295 00:14:26.295 test: (groupid=0, jobs=1): err= 0: pid=82509: Tue Nov 26 04:10:27 2024 00:14:26.295 read: IOPS=8529, BW=33.3MiB/s (34.9MB/s)(255MiB/7644msec) 00:14:26.295 slat (nsec): min=3011, max=22844, avg=3511.80, stdev=503.16 00:14:26.295 clat (usec): min=475, max=29244, avg=14999.21, stdev=1558.09 00:14:26.295 lat (usec): min=479, max=29248, avg=15002.72, stdev=1558.10 00:14:26.295 clat percentiles (usec): 00:14:26.295 | 1.00th=[13960], 5.00th=[14091], 10.00th=[14222], 20.00th=[14353], 00:14:26.295 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:14:26.295 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15533], 95.00th=[17957], 00:14:26.295 | 99.00th=[22414], 99.50th=[23200], 99.90th=[25560], 99.95th=[27395], 00:14:26.295 | 99.99th=[28967] 00:14:26.295 write: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(256MiB/3649msec); 0 zone resets 00:14:26.295 slat (usec): min=3, max=148, avg= 5.93, stdev= 2.11 00:14:26.295 clat (usec): min=386, max=44869, avg=7084.88, 
stdev=9033.08 00:14:26.295 lat (usec): min=392, max=44874, avg=7090.81, stdev=9033.02 00:14:26.295 clat percentiles (usec): 00:14:26.295 | 1.00th=[ 586], 5.00th=[ 701], 10.00th=[ 799], 20.00th=[ 914], 00:14:26.295 | 30.00th=[ 1020], 40.00th=[ 1270], 50.00th=[ 4293], 60.00th=[ 5080], 00:14:26.295 | 70.00th=[ 6194], 80.00th=[ 7767], 90.00th=[26608], 95.00th=[27919], 00:14:26.295 | 99.00th=[29492], 99.50th=[31327], 99.90th=[36963], 99.95th=[38011], 00:14:26.295 | 99.99th=[43779] 00:14:26.295 bw ( KiB/s): min=17696, max=98552, per=91.23%, avg=65536.00, stdev=23937.50, samples=8 00:14:26.295 iops : min= 4424, max=24638, avg=16384.00, stdev=5984.37, samples=8 00:14:26.295 lat (usec) : 500=0.08%, 750=3.51%, 1000=10.76% 00:14:26.295 lat (msec) : 2=6.37%, 4=2.55%, 10=18.70%, 20=48.84%, 50=9.18% 00:14:26.295 cpu : usr=99.41%, sys=0.15%, ctx=20, majf=0, minf=5577 00:14:26.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:26.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.295 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:26.295 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:26.295 00:14:26.295 Run status group 0 (all jobs): 00:14:26.295 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=255MiB (267MB), run=7644-7644msec 00:14:26.295 WRITE: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=256MiB (268MB), run=3649-3649msec 00:14:26.861 ----------------------------------------------------- 00:14:26.861 Suppressions used: 00:14:26.861 count bytes template 00:14:26.861 1 5 /usr/src/fio/parse.c 00:14:26.861 2 192 /usr/src/fio/iolog.c 00:14:26.861 1 8 libtcmalloc_minimal.so 00:14:26.861 1 904 libcrypto.so 00:14:26.861 ----------------------------------------------------- 00:14:26.861 00:14:26.861 04:10:28 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:14:26.861 04:10:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.861 04:10:28 -- common/autotest_common.sh@10 -- # set +x 00:14:26.861 04:10:28 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:27.120 Remove shared memory files 00:14:27.120 04:10:28 -- ftl/fio.sh@85 -- # remove_shm 00:14:27.120 04:10:28 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:14:27.120 04:10:28 -- ftl/common.sh@205 -- # rm -f rm -f 00:14:27.120 04:10:28 -- ftl/common.sh@206 -- # rm -f rm -f 00:14:27.120 04:10:28 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid68557 /dev/shm/spdk_tgt_trace.pid80928 00:14:27.120 04:10:28 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:14:27.120 04:10:28 -- ftl/common.sh@209 -- # rm -f rm -f 00:14:27.120 00:14:27.120 real 0m50.271s 00:14:27.120 user 1m52.848s 00:14:27.120 sys 0m2.399s 00:14:27.120 04:10:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:27.120 04:10:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.120 ************************************ 00:14:27.120 END TEST ftl_fio_basic 00:14:27.120 ************************************ 00:14:27.120 04:10:28 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:14:27.120 04:10:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:27.120 04:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:27.120 04:10:28 -- common/autotest_common.sh@10 -- # set +x 
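That closes ftl_fio_basic: 50.271 s of wall clock for the three verify jobs, with the shared-memory trace files removed and the harness moving straight on to ftl_bdevperf. Judging purely from the banners and the real/user/sys block above, run_test is a thin timing-and-banner wrapper around each test script; a minimal reconstruction of that shape, assumed from this output rather than taken from the actual autotest_common.sh source:

  # Hypothetical reconstruction: bracket the test in START/END banners and time it.
  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"    # produces the real/user/sys block seen above
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }
  # As invoked here:
  # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0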
00:14:27.120 ************************************ 00:14:27.120 START TEST ftl_bdevperf 00:14:27.120 ************************************ 00:14:27.120 04:10:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:14:27.120 * Looking for test storage... 00:14:27.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:27.120 04:10:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:27.120 04:10:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:27.120 04:10:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:27.120 04:10:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:27.120 04:10:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:27.120 04:10:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:27.120 04:10:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:27.120 04:10:28 -- scripts/common.sh@335 -- # IFS=.-: 00:14:27.120 04:10:28 -- scripts/common.sh@335 -- # read -ra ver1 00:14:27.120 04:10:28 -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.120 04:10:28 -- scripts/common.sh@336 -- # read -ra ver2 00:14:27.120 04:10:28 -- scripts/common.sh@337 -- # local 'op=<' 00:14:27.120 04:10:28 -- scripts/common.sh@339 -- # ver1_l=2 00:14:27.120 04:10:28 -- scripts/common.sh@340 -- # ver2_l=1 00:14:27.120 04:10:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:27.120 04:10:28 -- scripts/common.sh@343 -- # case "$op" in 00:14:27.120 04:10:28 -- scripts/common.sh@344 -- # : 1 00:14:27.120 04:10:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:27.120 04:10:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.120 04:10:28 -- scripts/common.sh@364 -- # decimal 1 00:14:27.120 04:10:28 -- scripts/common.sh@352 -- # local d=1 00:14:27.120 04:10:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.120 04:10:28 -- scripts/common.sh@354 -- # echo 1 00:14:27.120 04:10:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:27.120 04:10:28 -- scripts/common.sh@365 -- # decimal 2 00:14:27.120 04:10:28 -- scripts/common.sh@352 -- # local d=2 00:14:27.120 04:10:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.120 04:10:28 -- scripts/common.sh@354 -- # echo 2 00:14:27.120 04:10:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:27.120 04:10:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:27.120 04:10:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:27.120 04:10:28 -- scripts/common.sh@367 -- # return 0 00:14:27.120 04:10:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.120 04:10:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:27.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.120 --rc genhtml_branch_coverage=1 00:14:27.120 --rc genhtml_function_coverage=1 00:14:27.120 --rc genhtml_legend=1 00:14:27.120 --rc geninfo_all_blocks=1 00:14:27.120 --rc geninfo_unexecuted_blocks=1 00:14:27.120 00:14:27.120 ' 00:14:27.120 04:10:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:27.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.120 --rc genhtml_branch_coverage=1 00:14:27.120 --rc genhtml_function_coverage=1 00:14:27.120 --rc genhtml_legend=1 00:14:27.120 --rc geninfo_all_blocks=1 00:14:27.120 --rc geninfo_unexecuted_blocks=1 00:14:27.120 00:14:27.120 ' 00:14:27.120 04:10:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:27.120 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.121 --rc genhtml_branch_coverage=1 00:14:27.121 --rc genhtml_function_coverage=1 00:14:27.121 --rc genhtml_legend=1 00:14:27.121 --rc geninfo_all_blocks=1 00:14:27.121 --rc geninfo_unexecuted_blocks=1 00:14:27.121 00:14:27.121 ' 00:14:27.121 04:10:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:27.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.121 --rc genhtml_branch_coverage=1 00:14:27.121 --rc genhtml_function_coverage=1 00:14:27.121 --rc genhtml_legend=1 00:14:27.121 --rc geninfo_all_blocks=1 00:14:27.121 --rc geninfo_unexecuted_blocks=1 00:14:27.121 00:14:27.121 ' 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:27.121 04:10:28 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:14:27.121 04:10:28 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:27.121 04:10:28 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:27.121 04:10:28 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:27.121 04:10:28 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:27.121 04:10:28 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.121 04:10:28 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:27.121 04:10:28 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:27.121 04:10:28 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:27.121 04:10:28 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:27.121 04:10:28 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:27.121 04:10:28 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:27.121 04:10:28 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:27.121 04:10:28 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:27.121 04:10:28 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:27.121 04:10:28 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:27.121 04:10:28 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:27.121 04:10:28 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:27.121 04:10:28 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:27.121 04:10:28 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:27.121 04:10:28 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:27.121 04:10:28 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:27.121 04:10:28 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:27.121 04:10:28 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:27.121 04:10:28 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:27.121 04:10:28 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:27.121 04:10:28 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:27.121 04:10:28 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@13 -- 
# use_append= 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@15 -- # timeout=240 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:14:27.121 04:10:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.121 04:10:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=82720 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@22 -- # waitforlisten 82720 00:14:27.121 04:10:28 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:14:27.121 04:10:28 -- common/autotest_common.sh@829 -- # '[' -z 82720 ']' 00:14:27.121 04:10:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.121 04:10:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.121 04:10:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.121 04:10:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.121 04:10:28 -- common/autotest_common.sh@10 -- # set +x 00:14:27.379 [2024-11-26 04:10:28.894478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:27.379 [2024-11-26 04:10:28.894775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82720 ] 00:14:27.379 [2024-11-26 04:10:29.042148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.379 [2024-11-26 04:10:29.070720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.311 04:10:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.311 04:10:29 -- common/autotest_common.sh@862 -- # return 0 00:14:28.311 04:10:29 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:28.311 04:10:29 -- ftl/common.sh@54 -- # local name=nvme0 00:14:28.311 04:10:29 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:28.311 04:10:29 -- ftl/common.sh@56 -- # local size=103424 00:14:28.311 04:10:29 -- ftl/common.sh@59 -- # local base_bdev 00:14:28.311 04:10:29 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:28.311 04:10:29 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:28.311 04:10:29 -- ftl/common.sh@62 -- # local base_size 00:14:28.311 04:10:29 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:28.311 04:10:29 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:28.311 04:10:29 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:28.311 04:10:29 -- common/autotest_common.sh@1369 -- # local bs 00:14:28.311 04:10:29 -- common/autotest_common.sh@1370 -- # local nb 00:14:28.311 04:10:29 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:28.569 04:10:30 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:28.569 { 00:14:28.569 "name": "nvme0n1", 00:14:28.569 "aliases": [ 00:14:28.569 "a52a42f2-572b-45a7-bb70-159ed87a2494" 
00:14:28.569 ], 00:14:28.569 "product_name": "NVMe disk", 00:14:28.569 "block_size": 4096, 00:14:28.569 "num_blocks": 1310720, 00:14:28.569 "uuid": "a52a42f2-572b-45a7-bb70-159ed87a2494", 00:14:28.569 "assigned_rate_limits": { 00:14:28.569 "rw_ios_per_sec": 0, 00:14:28.569 "rw_mbytes_per_sec": 0, 00:14:28.569 "r_mbytes_per_sec": 0, 00:14:28.569 "w_mbytes_per_sec": 0 00:14:28.569 }, 00:14:28.569 "claimed": true, 00:14:28.569 "claim_type": "read_many_write_one", 00:14:28.569 "zoned": false, 00:14:28.569 "supported_io_types": { 00:14:28.569 "read": true, 00:14:28.569 "write": true, 00:14:28.569 "unmap": true, 00:14:28.569 "write_zeroes": true, 00:14:28.569 "flush": true, 00:14:28.569 "reset": true, 00:14:28.569 "compare": true, 00:14:28.569 "compare_and_write": false, 00:14:28.569 "abort": true, 00:14:28.569 "nvme_admin": true, 00:14:28.569 "nvme_io": true 00:14:28.569 }, 00:14:28.569 "driver_specific": { 00:14:28.569 "nvme": [ 00:14:28.569 { 00:14:28.569 "pci_address": "0000:00:07.0", 00:14:28.569 "trid": { 00:14:28.569 "trtype": "PCIe", 00:14:28.569 "traddr": "0000:00:07.0" 00:14:28.569 }, 00:14:28.569 "ctrlr_data": { 00:14:28.569 "cntlid": 0, 00:14:28.569 "vendor_id": "0x1b36", 00:14:28.569 "model_number": "QEMU NVMe Ctrl", 00:14:28.569 "serial_number": "12341", 00:14:28.569 "firmware_revision": "8.0.0", 00:14:28.569 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:28.569 "oacs": { 00:14:28.569 "security": 0, 00:14:28.569 "format": 1, 00:14:28.569 "firmware": 0, 00:14:28.569 "ns_manage": 1 00:14:28.569 }, 00:14:28.569 "multi_ctrlr": false, 00:14:28.569 "ana_reporting": false 00:14:28.569 }, 00:14:28.569 "vs": { 00:14:28.569 "nvme_version": "1.4" 00:14:28.569 }, 00:14:28.569 "ns_data": { 00:14:28.569 "id": 1, 00:14:28.569 "can_share": false 00:14:28.569 } 00:14:28.569 } 00:14:28.569 ], 00:14:28.569 "mp_policy": "active_passive" 00:14:28.569 } 00:14:28.569 } 00:14:28.569 ]' 00:14:28.569 04:10:30 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:28.569 04:10:30 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:28.569 04:10:30 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:28.569 04:10:30 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:28.569 04:10:30 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:28.569 04:10:30 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:28.569 04:10:30 -- ftl/common.sh@63 -- # base_size=5120 00:14:28.569 04:10:30 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:28.569 04:10:30 -- ftl/common.sh@67 -- # clear_lvols 00:14:28.569 04:10:30 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:28.569 04:10:30 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:28.828 04:10:30 -- ftl/common.sh@28 -- # stores=47aaf202-b531-487f-a3ac-c5d398af2ed1 00:14:28.828 04:10:30 -- ftl/common.sh@29 -- # for lvs in $stores 00:14:28.828 04:10:30 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47aaf202-b531-487f-a3ac-c5d398af2ed1 00:14:28.828 04:10:30 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:29.087 04:10:30 -- ftl/common.sh@68 -- # lvs=2c4ca93e-b469-4aa7-8d7d-104027f3e774 00:14:29.087 04:10:30 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2c4ca93e-b469-4aa7-8d7d-104027f3e774 00:14:29.346 04:10:30 -- ftl/bdevperf.sh@23 -- # split_bdev=328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:29.346 04:10:30 -- 
ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:29.346 04:10:30 -- ftl/common.sh@35 -- # local name=nvc0 00:14:29.346 04:10:30 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:29.346 04:10:30 -- ftl/common.sh@37 -- # local base_bdev=328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:29.346 04:10:30 -- ftl/common.sh@38 -- # local cache_size= 00:14:29.346 04:10:30 -- ftl/common.sh@41 -- # get_bdev_size 328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:29.346 04:10:30 -- common/autotest_common.sh@1367 -- # local bdev_name=328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:29.346 04:10:30 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:29.346 04:10:30 -- common/autotest_common.sh@1369 -- # local bs 00:14:29.346 04:10:30 -- common/autotest_common.sh@1370 -- # local nb 00:14:29.346 04:10:30 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:29.605 04:10:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:29.605 { 00:14:29.605 "name": "328632c7-f775-4e00-9c81-15c6e00eb90f", 00:14:29.605 "aliases": [ 00:14:29.605 "lvs/nvme0n1p0" 00:14:29.605 ], 00:14:29.605 "product_name": "Logical Volume", 00:14:29.605 "block_size": 4096, 00:14:29.605 "num_blocks": 26476544, 00:14:29.605 "uuid": "328632c7-f775-4e00-9c81-15c6e00eb90f", 00:14:29.605 "assigned_rate_limits": { 00:14:29.605 "rw_ios_per_sec": 0, 00:14:29.605 "rw_mbytes_per_sec": 0, 00:14:29.605 "r_mbytes_per_sec": 0, 00:14:29.605 "w_mbytes_per_sec": 0 00:14:29.605 }, 00:14:29.605 "claimed": false, 00:14:29.605 "zoned": false, 00:14:29.605 "supported_io_types": { 00:14:29.605 "read": true, 00:14:29.605 "write": true, 00:14:29.605 "unmap": true, 00:14:29.605 "write_zeroes": true, 00:14:29.605 "flush": false, 00:14:29.605 "reset": true, 00:14:29.605 "compare": false, 00:14:29.605 "compare_and_write": false, 00:14:29.605 "abort": false, 00:14:29.605 "nvme_admin": false, 00:14:29.605 "nvme_io": false 00:14:29.605 }, 00:14:29.605 "driver_specific": { 00:14:29.605 "lvol": { 00:14:29.605 "lvol_store_uuid": "2c4ca93e-b469-4aa7-8d7d-104027f3e774", 00:14:29.605 "base_bdev": "nvme0n1", 00:14:29.605 "thin_provision": true, 00:14:29.605 "snapshot": false, 00:14:29.605 "clone": false, 00:14:29.605 "esnap_clone": false 00:14:29.605 } 00:14:29.605 } 00:14:29.605 } 00:14:29.605 ]' 00:14:29.605 04:10:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:29.605 04:10:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:29.605 04:10:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:29.605 04:10:31 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:29.605 04:10:31 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:29.605 04:10:31 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:29.605 04:10:31 -- ftl/common.sh@41 -- # local base_size=5171 00:14:29.605 04:10:31 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:29.605 04:10:31 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:29.864 04:10:31 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:29.864 04:10:31 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:29.864 04:10:31 -- ftl/common.sh@48 -- # get_bdev_size 328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:29.864 04:10:31 -- common/autotest_common.sh@1367 -- # local bdev_name=328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:29.864 04:10:31 -- common/autotest_common.sh@1368 -- # local bdev_info 
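The get_bdev_size calls traced here fetch the bdev's JSON descriptor over RPC and reduce it to a size in MiB with jq; the same arithmetic produced the 5120 MiB and 103424 MiB figures above. A condensed sketch, not the verbatim helper:

  get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<<"$bdev_info")    # 4096 for these bdevs
      nb=$(jq '.[] .num_blocks' <<<"$bdev_info")    # e.g. 26476544
      echo $(( nb * bs / 1024 / 1024 ))             # 26476544 blocks * 4096 B -> 103424 MiB
  }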
00:14:29.864 04:10:31 -- common/autotest_common.sh@1369 -- # local bs 00:14:29.864 04:10:31 -- common/autotest_common.sh@1370 -- # local nb 00:14:29.864 04:10:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:30.123 04:10:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:30.123 { 00:14:30.123 "name": "328632c7-f775-4e00-9c81-15c6e00eb90f", 00:14:30.123 "aliases": [ 00:14:30.123 "lvs/nvme0n1p0" 00:14:30.123 ], 00:14:30.123 "product_name": "Logical Volume", 00:14:30.123 "block_size": 4096, 00:14:30.123 "num_blocks": 26476544, 00:14:30.123 "uuid": "328632c7-f775-4e00-9c81-15c6e00eb90f", 00:14:30.123 "assigned_rate_limits": { 00:14:30.123 "rw_ios_per_sec": 0, 00:14:30.123 "rw_mbytes_per_sec": 0, 00:14:30.123 "r_mbytes_per_sec": 0, 00:14:30.123 "w_mbytes_per_sec": 0 00:14:30.123 }, 00:14:30.123 "claimed": false, 00:14:30.123 "zoned": false, 00:14:30.123 "supported_io_types": { 00:14:30.123 "read": true, 00:14:30.123 "write": true, 00:14:30.123 "unmap": true, 00:14:30.123 "write_zeroes": true, 00:14:30.123 "flush": false, 00:14:30.123 "reset": true, 00:14:30.123 "compare": false, 00:14:30.123 "compare_and_write": false, 00:14:30.123 "abort": false, 00:14:30.123 "nvme_admin": false, 00:14:30.123 "nvme_io": false 00:14:30.123 }, 00:14:30.123 "driver_specific": { 00:14:30.123 "lvol": { 00:14:30.123 "lvol_store_uuid": "2c4ca93e-b469-4aa7-8d7d-104027f3e774", 00:14:30.123 "base_bdev": "nvme0n1", 00:14:30.123 "thin_provision": true, 00:14:30.123 "snapshot": false, 00:14:30.123 "clone": false, 00:14:30.123 "esnap_clone": false 00:14:30.123 } 00:14:30.123 } 00:14:30.123 } 00:14:30.123 ]' 00:14:30.123 04:10:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:30.123 04:10:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:30.123 04:10:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:30.123 04:10:31 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:30.123 04:10:31 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:30.123 04:10:31 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:30.123 04:10:31 -- ftl/common.sh@48 -- # cache_size=5171 00:14:30.123 04:10:31 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:30.382 04:10:31 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:14:30.382 04:10:31 -- ftl/bdevperf.sh@26 -- # get_bdev_size 328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:30.382 04:10:31 -- common/autotest_common.sh@1367 -- # local bdev_name=328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:30.382 04:10:31 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:30.382 04:10:31 -- common/autotest_common.sh@1369 -- # local bs 00:14:30.382 04:10:31 -- common/autotest_common.sh@1370 -- # local nb 00:14:30.382 04:10:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 328632c7-f775-4e00-9c81-15c6e00eb90f 00:14:30.382 04:10:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:30.382 { 00:14:30.382 "name": "328632c7-f775-4e00-9c81-15c6e00eb90f", 00:14:30.382 "aliases": [ 00:14:30.382 "lvs/nvme0n1p0" 00:14:30.382 ], 00:14:30.382 "product_name": "Logical Volume", 00:14:30.382 "block_size": 4096, 00:14:30.382 "num_blocks": 26476544, 00:14:30.382 "uuid": "328632c7-f775-4e00-9c81-15c6e00eb90f", 00:14:30.382 "assigned_rate_limits": { 00:14:30.382 "rw_ios_per_sec": 0, 00:14:30.382 "rw_mbytes_per_sec": 0, 00:14:30.382 "r_mbytes_per_sec": 
0, 00:14:30.382 "w_mbytes_per_sec": 0 00:14:30.382 }, 00:14:30.382 "claimed": false, 00:14:30.382 "zoned": false, 00:14:30.382 "supported_io_types": { 00:14:30.382 "read": true, 00:14:30.382 "write": true, 00:14:30.382 "unmap": true, 00:14:30.382 "write_zeroes": true, 00:14:30.382 "flush": false, 00:14:30.382 "reset": true, 00:14:30.382 "compare": false, 00:14:30.382 "compare_and_write": false, 00:14:30.382 "abort": false, 00:14:30.382 "nvme_admin": false, 00:14:30.382 "nvme_io": false 00:14:30.382 }, 00:14:30.382 "driver_specific": { 00:14:30.382 "lvol": { 00:14:30.382 "lvol_store_uuid": "2c4ca93e-b469-4aa7-8d7d-104027f3e774", 00:14:30.382 "base_bdev": "nvme0n1", 00:14:30.382 "thin_provision": true, 00:14:30.382 "snapshot": false, 00:14:30.382 "clone": false, 00:14:30.382 "esnap_clone": false 00:14:30.382 } 00:14:30.382 } 00:14:30.382 } 00:14:30.382 ]' 00:14:30.382 04:10:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:30.642 04:10:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:30.642 04:10:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:30.642 04:10:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:30.642 04:10:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:30.642 04:10:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:30.642 04:10:32 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:14:30.642 04:10:32 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 328632c7-f775-4e00-9c81-15c6e00eb90f -c nvc0n1p0 --l2p_dram_limit 20 00:14:30.642 [2024-11-26 04:10:32.358169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.358214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:30.642 [2024-11-26 04:10:32.358228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:30.642 [2024-11-26 04:10:32.358235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.358280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.358288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:30.642 [2024-11-26 04:10:32.358299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:14:30.642 [2024-11-26 04:10:32.358304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.358324] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:30.642 [2024-11-26 04:10:32.358692] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:30.642 [2024-11-26 04:10:32.358734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.358750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:30.642 [2024-11-26 04:10:32.358767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:14:30.642 [2024-11-26 04:10:32.358788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.359179] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 76c2d783-efe8-4819-a0d7-bfbb6e8064e2 00:14:30.642 [2024-11-26 04:10:32.360186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.360291] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:30.642 [2024-11-26 04:10:32.360342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:14:30.642 [2024-11-26 04:10:32.360364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.365122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.365154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:30.642 [2024-11-26 04:10:32.365161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:14:30.642 [2024-11-26 04:10:32.365172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.365248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.365260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:30.642 [2024-11-26 04:10:32.365266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:14:30.642 [2024-11-26 04:10:32.365274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.365313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.365322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:30.642 [2024-11-26 04:10:32.365328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:14:30.642 [2024-11-26 04:10:32.365335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.365352] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:30.642 [2024-11-26 04:10:32.366629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.366650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:30.642 [2024-11-26 04:10:32.366659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.281 ms 00:14:30.642 [2024-11-26 04:10:32.366666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.366692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.366698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:30.642 [2024-11-26 04:10:32.366707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:30.642 [2024-11-26 04:10:32.366712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.366730] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:30.642 [2024-11-26 04:10:32.366826] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:30.642 [2024-11-26 04:10:32.366837] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:30.642 [2024-11-26 04:10:32.366845] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:30.642 [2024-11-26 04:10:32.366855] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:30.642 [2024-11-26 04:10:32.366864] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache 
device capacity: 5171.00 MiB 00:14:30.642 [2024-11-26 04:10:32.366871] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:30.642 [2024-11-26 04:10:32.366878] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:30.642 [2024-11-26 04:10:32.366888] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:30.642 [2024-11-26 04:10:32.366893] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:30.642 [2024-11-26 04:10:32.366899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.642 [2024-11-26 04:10:32.366906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:30.642 [2024-11-26 04:10:32.366913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:14:30.642 [2024-11-26 04:10:32.366918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.642 [2024-11-26 04:10:32.366966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.643 [2024-11-26 04:10:32.366972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:30.643 [2024-11-26 04:10:32.366983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:14:30.643 [2024-11-26 04:10:32.366994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.643 [2024-11-26 04:10:32.367052] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:30.643 [2024-11-26 04:10:32.367059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:30.643 [2024-11-26 04:10:32.367068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:30.643 [2024-11-26 04:10:32.367086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:30.643 [2024-11-26 04:10:32.367104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:30.643 [2024-11-26 04:10:32.367115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:30.643 [2024-11-26 04:10:32.367120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:30.643 [2024-11-26 04:10:32.367127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:30.643 [2024-11-26 04:10:32.367132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:30.643 [2024-11-26 04:10:32.367140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:30.643 [2024-11-26 04:10:32.367146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367152] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:30.643 [2024-11-26 04:10:32.367157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:30.643 [2024-11-26 04:10:32.367163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:30.643 
[2024-11-26 04:10:32.367168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:30.643 [2024-11-26 04:10:32.367176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:30.643 [2024-11-26 04:10:32.367181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:30.643 [2024-11-26 04:10:32.367192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:30.643 [2024-11-26 04:10:32.367209] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367221] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:30.643 [2024-11-26 04:10:32.367226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:30.643 [2024-11-26 04:10:32.367245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:30.643 [2024-11-26 04:10:32.367261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:30.643 [2024-11-26 04:10:32.367272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:30.643 [2024-11-26 04:10:32.367279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:30.643 [2024-11-26 04:10:32.367285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:30.643 [2024-11-26 04:10:32.367291] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:30.643 [2024-11-26 04:10:32.367298] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:30.643 [2024-11-26 04:10:32.367305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:30.643 [2024-11-26 04:10:32.367446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:30.643 [2024-11-26 04:10:32.367452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:30.643 [2024-11-26 04:10:32.367459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:30.643 [2024-11-26 04:10:32.367466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:30.643 [2024-11-26 04:10:32.367473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:30.643 [2024-11-26 04:10:32.367479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:30.643 [2024-11-26 04:10:32.367488] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:30.643 [2024-11-26 04:10:32.367498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:30.643 [2024-11-26 04:10:32.367522] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:30.643 [2024-11-26 04:10:32.367529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:30.643 [2024-11-26 04:10:32.367536] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:30.643 [2024-11-26 04:10:32.367542] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:30.643 [2024-11-26 04:10:32.367550] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:30.643 [2024-11-26 04:10:32.367556] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:30.643 [2024-11-26 04:10:32.367565] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:30.643 [2024-11-26 04:10:32.367571] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:30.643 [2024-11-26 04:10:32.367579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:30.643 [2024-11-26 04:10:32.367585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:30.643 [2024-11-26 04:10:32.367593] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:30.643 [2024-11-26 04:10:32.367600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:30.643 [2024-11-26 04:10:32.367608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:30.643 [2024-11-26 04:10:32.367614] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:30.643 [2024-11-26 04:10:32.367623] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:30.643 [2024-11-26 04:10:32.367632] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:30.643 [2024-11-26 04:10:32.367639] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:30.643 [2024-11-26 04:10:32.367645] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:30.643 [2024-11-26 04:10:32.367652] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:14:30.643 [2024-11-26 04:10:32.367659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.643 [2024-11-26 04:10:32.367668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:30.643 [2024-11-26 04:10:32.367674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:14:30.644 [2024-11-26 04:10:32.367680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.644 [2024-11-26 04:10:32.372957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.644 [2024-11-26 04:10:32.372986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:30.644 [2024-11-26 04:10:32.372994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.253 ms 00:14:30.644 [2024-11-26 04:10:32.373003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.644 [2024-11-26 04:10:32.373066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.644 [2024-11-26 04:10:32.373074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:30.644 [2024-11-26 04:10:32.373081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:14:30.644 [2024-11-26 04:10:32.373088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.644 [2024-11-26 04:10:32.390328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.644 [2024-11-26 04:10:32.390388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:30.644 [2024-11-26 04:10:32.390417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.207 ms 00:14:30.644 [2024-11-26 04:10:32.390432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.644 [2024-11-26 04:10:32.390476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.644 [2024-11-26 04:10:32.390496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:30.644 [2024-11-26 04:10:32.390532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:30.644 [2024-11-26 04:10:32.390546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.644 [2024-11-26 04:10:32.390966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.644 [2024-11-26 04:10:32.391002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:30.644 [2024-11-26 04:10:32.391018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:14:30.644 [2024-11-26 04:10:32.391033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.644 [2024-11-26 04:10:32.391203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.644 [2024-11-26 04:10:32.391220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:30.644 [2024-11-26 04:10:32.391239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:14:30.644 [2024-11-26 04:10:32.391255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.644 [2024-11-26 04:10:32.397577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.644 [2024-11-26 04:10:32.397691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:30.644 [2024-11-26 04:10:32.397702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.297 ms 00:14:30.644 [2024-11-26 
04:10:32.397714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.902 [2024-11-26 04:10:32.404266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:14:30.902 [2024-11-26 04:10:32.408520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.902 [2024-11-26 04:10:32.408540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:30.902 [2024-11-26 04:10:32.408550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.755 ms 00:14:30.902 [2024-11-26 04:10:32.408557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.902 [2024-11-26 04:10:32.463213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:30.902 [2024-11-26 04:10:32.463252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:30.902 [2024-11-26 04:10:32.463263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.631 ms 00:14:30.902 [2024-11-26 04:10:32.463272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:30.902 [2024-11-26 04:10:32.463291] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:14:30.902 [2024-11-26 04:10:32.463300] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:33.430 [2024-11-26 04:10:34.678242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.678604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:33.430 [2024-11-26 04:10:34.678652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2214.925 ms 00:14:33.430 [2024-11-26 04:10:34.678671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.679039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.679063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:33.430 [2024-11-26 04:10:34.679091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:14:33.430 [2024-11-26 04:10:34.679109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.683986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.684044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:33.430 [2024-11-26 04:10:34.684080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.810 ms 00:14:33.430 [2024-11-26 04:10:34.684095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.688118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.688171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:33.430 [2024-11-26 04:10:34.688194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.959 ms 00:14:33.430 [2024-11-26 04:10:34.688207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.688590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.688624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:33.430 [2024-11-26 04:10:34.688649] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:14:33.430 [2024-11-26 04:10:34.688663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.708834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.708979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:33.430 [2024-11-26 04:10:34.708999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.136 ms 00:14:33.430 [2024-11-26 04:10:34.709007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.712726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.712767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:33.430 [2024-11-26 04:10:34.712785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:14:33.430 [2024-11-26 04:10:34.712797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.713993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.714025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:33.430 [2024-11-26 04:10:34.714037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:14:33.430 [2024-11-26 04:10:34.714044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.716973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.717006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:33.430 [2024-11-26 04:10:34.717018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:14:33.430 [2024-11-26 04:10:34.717026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.717062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.717072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:33.430 [2024-11-26 04:10:34.717083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:33.430 [2024-11-26 04:10:34.717091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.717155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:33.430 [2024-11-26 04:10:34.717163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:33.430 [2024-11-26 04:10:34.717175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:33.430 [2024-11-26 04:10:34.717182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:33.430 [2024-11-26 04:10:34.718025] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2359.485 ms, result 0 00:14:33.430 { 00:14:33.430 "name": "ftl0", 00:14:33.430 "uuid": "76c2d783-efe8-4819-a0d7-bfbb6e8064e2" 00:14:33.430 } 00:14:33.430 04:10:34 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:14:33.430 04:10:34 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:14:33.430 04:10:34 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:14:33.430 04:10:34 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 
69632 00:14:33.430 [2024-11-26 04:10:35.003660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:14:33.430 I/O size of 69632 is greater than zero copy threshold (65536). 00:14:33.430 Zero copy mechanism will not be used. 00:14:33.430 Running I/O for 4 seconds... 00:14:37.698 00:14:37.698 Latency(us) 00:14:37.698 [2024-11-26T04:10:39.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.698 [2024-11-26T04:10:39.466Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:14:37.698 ftl0 : 4.00 3410.41 226.47 0.00 0.00 308.20 143.36 2457.60 00:14:37.698 [2024-11-26T04:10:39.466Z] =================================================================================================================== 00:14:37.698 [2024-11-26T04:10:39.466Z] Total : 3410.41 226.47 0.00 0.00 308.20 143.36 2457.60 00:14:37.698 [2024-11-26 04:10:39.010094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:14:37.698 0 00:14:37.698 04:10:39 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:14:37.698 [2024-11-26 04:10:39.116221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:14:37.698 Running I/O for 4 seconds... 00:14:41.883 00:14:41.883 Latency(us) 00:14:41.883 [2024-11-26T04:10:43.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.883 [2024-11-26T04:10:43.651Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.883 ftl0 : 4.01 11210.19 43.79 0.00 0.00 11397.90 281.99 33070.47 00:14:41.883 [2024-11-26T04:10:43.651Z] =================================================================================================================== 00:14:41.883 [2024-11-26T04:10:43.651Z] Total : 11210.19 43.79 0.00 0.00 11397.90 0.00 33070.47 00:14:41.883 [2024-11-26 04:10:43.134711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:14:41.883 0 00:14:41.884 04:10:43 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:14:41.884 [2024-11-26 04:10:43.238681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:14:41.884 Running I/O for 4 seconds... 
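This verify pass is the third workload submitted to the same bdevperf process, which was started once in idle mode and is driven over the /var/tmp/spdk.sock RPC socket waited on above. Condensed, with the paths and flags as traced (not the verbatim bdevperf.sh):

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -z -T ftl0 &   # -z: start idle, wait for RPC; run against the ftl0 bdev
  bdevperf_pid=$!                              # reaped by the trap registered at bdevperf.sh@21
  py=$spdk/examples/bdev/bdevperf/bdevperf.py
  $py perform_tests -q 1   -w randwrite -t 4 -o 69632   # 69632 B > 65536 B threshold: no zero copy
  $py perform_tests -q 128 -w randwrite -t 4 -o 4096
  $py perform_tests -q 128 -w verify    -t 4 -o 4096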
00:14:46.072 00:14:46.072 Latency(us) 00:14:46.072 [2024-11-26T04:10:47.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.072 [2024-11-26T04:10:47.840Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:46.072 Verification LBA range: start 0x0 length 0x1400000 00:14:46.072 ftl0 : 4.00 12504.05 48.84 0.00 0.00 10211.48 49.82 50412.31 00:14:46.072 [2024-11-26T04:10:47.840Z] =================================================================================================================== 00:14:46.072 [2024-11-26T04:10:47.840Z] Total : 12504.05 48.84 0.00 0.00 10211.48 0.00 50412.31 00:14:46.072 [2024-11-26 04:10:47.250540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:14:46.072 0 00:14:46.072 04:10:47 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:14:46.072 [2024-11-26 04:10:47.440353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.440600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:46.072 [2024-11-26 04:10:47.440709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:46.072 [2024-11-26 04:10:47.440735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.072 [2024-11-26 04:10:47.440783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:46.072 [2024-11-26 04:10:47.441211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.441309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:46.072 [2024-11-26 04:10:47.441365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:14:46.072 [2024-11-26 04:10:47.441391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.072 [2024-11-26 04:10:47.443238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.443348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:46.072 [2024-11-26 04:10:47.443406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:14:46.072 [2024-11-26 04:10:47.443430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.072 [2024-11-26 04:10:47.584059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.584257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:46.072 [2024-11-26 04:10:47.584313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.604 ms 00:14:46.072 [2024-11-26 04:10:47.584337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.072 [2024-11-26 04:10:47.590469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.590579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:46.072 [2024-11-26 04:10:47.590627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.092 ms 00:14:46.072 [2024-11-26 04:10:47.590654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.072 [2024-11-26 04:10:47.592807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.592840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:14:46.072 [2024-11-26 04:10:47.592849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:14:46.072 [2024-11-26 04:10:47.592858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.072 [2024-11-26 04:10:47.597713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.597817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:46.072 [2024-11-26 04:10:47.597832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.826 ms 00:14:46.072 [2024-11-26 04:10:47.597844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.072 [2024-11-26 04:10:47.597951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.597963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:46.072 [2024-11-26 04:10:47.597971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:14:46.072 [2024-11-26 04:10:47.597979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.072 [2024-11-26 04:10:47.600641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.072 [2024-11-26 04:10:47.600673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:46.072 [2024-11-26 04:10:47.600682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.647 ms 00:14:46.073 [2024-11-26 04:10:47.600692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.073 [2024-11-26 04:10:47.602592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.073 [2024-11-26 04:10:47.602623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:46.073 [2024-11-26 04:10:47.602632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:14:46.073 [2024-11-26 04:10:47.602640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.073 [2024-11-26 04:10:47.604320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.073 [2024-11-26 04:10:47.604415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:46.073 [2024-11-26 04:10:47.604428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.653 ms 00:14:46.073 [2024-11-26 04:10:47.604436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.073 [2024-11-26 04:10:47.606018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.073 [2024-11-26 04:10:47.606050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:46.073 [2024-11-26 04:10:47.606059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:14:46.073 [2024-11-26 04:10:47.606066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.073 [2024-11-26 04:10:47.606093] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:46.073 [2024-11-26 04:10:47.606110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:46.073 [2024-11-26 04:10:47.606120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:46.073 [2024-11-26 04:10:47.606134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:46.073 [2024-11-26 04:10:47.606141] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
[Bands 5 through 99 elided: every band reports the identical line "0 / 261120 wr_cnt: 0 state: free"]
00:14:46.074 [2024-11-26 04:10:47.607002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:14:46.074 [2024-11-26 04:10:47.607019] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:14:46.074 [2024-11-26 04:10:47.607027] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76c2d783-efe8-4819-a0d7-bfbb6e8064e2
00:14:46.074 [2024-11-26 04:10:47.607037] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:14:46.074
[2024-11-26 04:10:47.607044] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:46.074 [2024-11-26 04:10:47.607052] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:46.074 [2024-11-26 04:10:47.607059] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:46.074 [2024-11-26 04:10:47.607069] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:46.074 [2024-11-26 04:10:47.607076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:46.074 [2024-11-26 04:10:47.607084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:46.074 [2024-11-26 04:10:47.607090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:46.074 [2024-11-26 04:10:47.607098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:46.074 [2024-11-26 04:10:47.607105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.074 [2024-11-26 04:10:47.607117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:46.074 [2024-11-26 04:10:47.607127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:14:46.074 [2024-11-26 04:10:47.607136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.608575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.074 [2024-11-26 04:10:47.608594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:46.074 [2024-11-26 04:10:47.608604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:14:46.074 [2024-11-26 04:10:47.608613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.608676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:46.074 [2024-11-26 04:10:47.608689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:46.074 [2024-11-26 04:10:47.608697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:14:46.074 [2024-11-26 04:10:47.608705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.613762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.613801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:46.074 [2024-11-26 04:10:47.613810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.613819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.613870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.613881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:46.074 [2024-11-26 04:10:47.613889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.613899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.613958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.613969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:46.074 [2024-11-26 04:10:47.613977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.613985] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.614003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.614014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:46.074 [2024-11-26 04:10:47.614024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.614032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.622224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.622375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:46.074 [2024-11-26 04:10:47.622391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.622401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.626019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.626126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:46.074 [2024-11-26 04:10:47.626139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.626150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.626189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.626199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:46.074 [2024-11-26 04:10:47.626207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.626215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.626253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.626263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:46.074 [2024-11-26 04:10:47.626274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.626285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.626349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.626360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:46.074 [2024-11-26 04:10:47.626371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.626379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.626407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.626418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:46.074 [2024-11-26 04:10:47.626425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.626435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.626468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.626477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:46.074 [2024-11-26 04:10:47.626485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:14:46.074 [2024-11-26 04:10:47.626493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.626551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.074 [2024-11-26 04:10:47.626563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:46.074 [2024-11-26 04:10:47.626571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.074 [2024-11-26 04:10:47.626581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.074 [2024-11-26 04:10:47.626696] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 186.314 ms, result 0 00:14:46.074 true 00:14:46.074 04:10:47 -- ftl/bdevperf.sh@37 -- # killprocess 82720 00:14:46.074 04:10:47 -- common/autotest_common.sh@936 -- # '[' -z 82720 ']' 00:14:46.074 04:10:47 -- common/autotest_common.sh@940 -- # kill -0 82720 00:14:46.074 04:10:47 -- common/autotest_common.sh@941 -- # uname 00:14:46.074 04:10:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.074 04:10:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82720 00:14:46.074 04:10:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:46.074 killing process with pid 82720 00:14:46.074 04:10:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:46.074 04:10:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82720' 00:14:46.074 04:10:47 -- common/autotest_common.sh@955 -- # kill 82720 00:14:46.074 Received shutdown signal, test time was about 4.000000 seconds 00:14:46.074 00:14:46.074 Latency(us) 00:14:46.074 [2024-11-26T04:10:47.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.074 [2024-11-26T04:10:47.842Z] =================================================================================================================== 00:14:46.075 [2024-11-26T04:10:47.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.075 04:10:47 -- common/autotest_common.sh@960 -- # wait 82720 00:14:46.337 04:10:47 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:14:46.337 04:10:47 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:14:46.337 04:10:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:46.337 04:10:47 -- common/autotest_common.sh@10 -- # set +x 00:14:46.337 Remove shared memory files 00:14:46.337 04:10:47 -- ftl/bdevperf.sh@41 -- # remove_shm 00:14:46.337 04:10:47 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:14:46.337 04:10:47 -- ftl/common.sh@205 -- # rm -f rm -f 00:14:46.337 04:10:47 -- ftl/common.sh@206 -- # rm -f rm -f 00:14:46.337 04:10:47 -- ftl/common.sh@207 -- # rm -f rm -f 00:14:46.337 04:10:47 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:14:46.337 04:10:47 -- ftl/common.sh@209 -- # rm -f rm -f 00:14:46.337 ************************************ 00:14:46.337 END TEST ftl_bdevperf 00:14:46.337 ************************************ 00:14:46.337 00:14:46.337 real 0m19.293s 00:14:46.337 user 0m21.812s 00:14:46.337 sys 0m0.789s 00:14:46.337 04:10:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:46.337 04:10:47 -- common/autotest_common.sh@10 -- # set +x 00:14:46.337 04:10:48 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:14:46.337 04:10:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
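
The ftl_bdevperf teardown above and the ftl_trim setup traced below drive SPDK through the same small set of bdev RPCs. As a reading aid, here is a minimal shell sketch of that flow, reconstructed from the rpc.py invocations visible in this log; the <lvs-uuid> and <lvol-uuid> placeholders stand for the UUIDs the real run captures from each command's output, and everything else is taken verbatim from the trace:

# Hedged sketch, not part of the test output: the RPC sequence trim.sh
# exercises below, with UUID placeholders instead of captured values.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device (0000:00:07.0) and NV cache device (0000:00:06.0)
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0

# Thin-provisioned 103424 MiB logical volume on the base device
$rpc bdev_lvol_create_lvstore nvme0n1 lvs
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>

# Carve a 5171 MiB split out of the cache controller for the NV cache
$rpc bdev_split_create nvc0n1 -s 5171 1

# FTL bdev on top of the lvol, with nvc0n1p0 as the write buffer cache
$rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

# ...workload runs here; teardown mirrors the shutdown trace above:
$rpc bdev_ftl_delete -b ftl0
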
00:14:46.337 04:10:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.337 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:14:46.337 ************************************ 00:14:46.337 START TEST ftl_trim 00:14:46.337 ************************************ 00:14:46.337 04:10:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:14:46.337 * Looking for test storage... 00:14:46.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:46.337 04:10:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:46.337 04:10:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:46.337 04:10:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:46.599 04:10:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:46.599 04:10:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:46.599 04:10:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:46.599 04:10:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:46.599 04:10:48 -- scripts/common.sh@335 -- # IFS=.-: 00:14:46.599 04:10:48 -- scripts/common.sh@335 -- # read -ra ver1 00:14:46.599 04:10:48 -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.599 04:10:48 -- scripts/common.sh@336 -- # read -ra ver2 00:14:46.599 04:10:48 -- scripts/common.sh@337 -- # local 'op=<' 00:14:46.599 04:10:48 -- scripts/common.sh@339 -- # ver1_l=2 00:14:46.599 04:10:48 -- scripts/common.sh@340 -- # ver2_l=1 00:14:46.599 04:10:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:46.599 04:10:48 -- scripts/common.sh@343 -- # case "$op" in 00:14:46.599 04:10:48 -- scripts/common.sh@344 -- # : 1 00:14:46.599 04:10:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:46.599 04:10:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.600 04:10:48 -- scripts/common.sh@364 -- # decimal 1 00:14:46.600 04:10:48 -- scripts/common.sh@352 -- # local d=1 00:14:46.600 04:10:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.600 04:10:48 -- scripts/common.sh@354 -- # echo 1 00:14:46.600 04:10:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:46.600 04:10:48 -- scripts/common.sh@365 -- # decimal 2 00:14:46.600 04:10:48 -- scripts/common.sh@352 -- # local d=2 00:14:46.600 04:10:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.600 04:10:48 -- scripts/common.sh@354 -- # echo 2 00:14:46.600 04:10:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:46.600 04:10:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:46.600 04:10:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:46.600 04:10:48 -- scripts/common.sh@367 -- # return 0 00:14:46.600 04:10:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.600 04:10:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:46.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.600 --rc genhtml_branch_coverage=1 00:14:46.600 --rc genhtml_function_coverage=1 00:14:46.600 --rc genhtml_legend=1 00:14:46.600 --rc geninfo_all_blocks=1 00:14:46.600 --rc geninfo_unexecuted_blocks=1 00:14:46.600 00:14:46.600 ' 00:14:46.600 04:10:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:46.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.600 --rc genhtml_branch_coverage=1 00:14:46.600 --rc genhtml_function_coverage=1 00:14:46.600 --rc genhtml_legend=1 00:14:46.600 --rc geninfo_all_blocks=1 00:14:46.600 --rc geninfo_unexecuted_blocks=1 00:14:46.600 00:14:46.600 ' 00:14:46.600 04:10:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:46.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.600 --rc genhtml_branch_coverage=1 00:14:46.600 --rc genhtml_function_coverage=1 00:14:46.600 --rc genhtml_legend=1 00:14:46.600 --rc geninfo_all_blocks=1 00:14:46.600 --rc geninfo_unexecuted_blocks=1 00:14:46.600 00:14:46.600 ' 00:14:46.600 04:10:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:46.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.600 --rc genhtml_branch_coverage=1 00:14:46.600 --rc genhtml_function_coverage=1 00:14:46.600 --rc genhtml_legend=1 00:14:46.600 --rc geninfo_all_blocks=1 00:14:46.600 --rc geninfo_unexecuted_blocks=1 00:14:46.600 00:14:46.600 ' 00:14:46.600 04:10:48 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:46.600 04:10:48 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:14:46.600 04:10:48 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:46.600 04:10:48 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:46.600 04:10:48 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:14:46.600 04:10:48 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:46.600 04:10:48 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.600 04:10:48 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:46.600 04:10:48 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:46.600 04:10:48 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.600 04:10:48 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.600 04:10:48 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:46.600 04:10:48 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:46.600 04:10:48 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:46.600 04:10:48 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:46.600 04:10:48 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:46.600 04:10:48 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:46.600 04:10:48 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.600 04:10:48 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:46.600 04:10:48 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:46.600 04:10:48 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:46.600 04:10:48 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:46.600 04:10:48 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:46.600 04:10:48 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:46.600 04:10:48 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:46.600 04:10:48 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:46.600 04:10:48 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:46.600 04:10:48 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:46.600 04:10:48 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:46.600 04:10:48 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.600 04:10:48 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:14:46.600 04:10:48 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:14:46.600 04:10:48 -- ftl/trim.sh@25 -- # timeout=240 00:14:46.600 04:10:48 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:14:46.600 04:10:48 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:14:46.600 04:10:48 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:14:46.600 04:10:48 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:14:46.600 04:10:48 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:14:46.600 04:10:48 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:46.600 04:10:48 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:46.600 04:10:48 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:46.600 04:10:48 -- ftl/trim.sh@40 -- # svcpid=83053 00:14:46.600 04:10:48 -- ftl/trim.sh@41 -- # waitforlisten 83053 00:14:46.600 04:10:48 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:14:46.600 04:10:48 -- common/autotest_common.sh@829 -- # '[' -z 83053 ']' 00:14:46.600 04:10:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.600 
04:10:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.600 04:10:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.600 04:10:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.600 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:14:46.600 [2024-11-26 04:10:48.239368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:46.600 [2024-11-26 04:10:48.239615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83053 ] 00:14:46.861 [2024-11-26 04:10:48.384266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:46.861 [2024-11-26 04:10:48.416925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:46.861 [2024-11-26 04:10:48.417438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.861 [2024-11-26 04:10:48.417780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.861 [2024-11-26 04:10:48.417837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.431 04:10:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.431 04:10:49 -- common/autotest_common.sh@862 -- # return 0 00:14:47.431 04:10:49 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:47.431 04:10:49 -- ftl/common.sh@54 -- # local name=nvme0 00:14:47.431 04:10:49 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:47.431 04:10:49 -- ftl/common.sh@56 -- # local size=103424 00:14:47.431 04:10:49 -- ftl/common.sh@59 -- # local base_bdev 00:14:47.431 04:10:49 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:47.689 04:10:49 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:47.689 04:10:49 -- ftl/common.sh@62 -- # local base_size 00:14:47.689 04:10:49 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:47.689 04:10:49 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:47.689 04:10:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:47.689 04:10:49 -- common/autotest_common.sh@1369 -- # local bs 00:14:47.689 04:10:49 -- common/autotest_common.sh@1370 -- # local nb 00:14:47.689 04:10:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:47.948 04:10:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:47.948 { 00:14:47.948 "name": "nvme0n1", 00:14:47.948 "aliases": [ 00:14:47.948 "a48eefe7-0e52-4e83-8840-1af11666dd5d" 00:14:47.948 ], 00:14:47.948 "product_name": "NVMe disk", 00:14:47.948 "block_size": 4096, 00:14:47.948 "num_blocks": 1310720, 00:14:47.948 "uuid": "a48eefe7-0e52-4e83-8840-1af11666dd5d", 00:14:47.948 "assigned_rate_limits": { 00:14:47.948 "rw_ios_per_sec": 0, 00:14:47.948 "rw_mbytes_per_sec": 0, 00:14:47.948 "r_mbytes_per_sec": 0, 00:14:47.948 "w_mbytes_per_sec": 0 00:14:47.948 }, 00:14:47.948 "claimed": true, 00:14:47.948 "claim_type": "read_many_write_one", 00:14:47.948 "zoned": false, 00:14:47.948 "supported_io_types": { 00:14:47.948 "read": true, 00:14:47.948 "write": true, 00:14:47.948 "unmap": true, 00:14:47.948 
"write_zeroes": true, 00:14:47.948 "flush": true, 00:14:47.948 "reset": true, 00:14:47.948 "compare": true, 00:14:47.948 "compare_and_write": false, 00:14:47.948 "abort": true, 00:14:47.948 "nvme_admin": true, 00:14:47.948 "nvme_io": true 00:14:47.948 }, 00:14:47.948 "driver_specific": { 00:14:47.948 "nvme": [ 00:14:47.948 { 00:14:47.948 "pci_address": "0000:00:07.0", 00:14:47.948 "trid": { 00:14:47.948 "trtype": "PCIe", 00:14:47.948 "traddr": "0000:00:07.0" 00:14:47.948 }, 00:14:47.948 "ctrlr_data": { 00:14:47.948 "cntlid": 0, 00:14:47.948 "vendor_id": "0x1b36", 00:14:47.948 "model_number": "QEMU NVMe Ctrl", 00:14:47.948 "serial_number": "12341", 00:14:47.948 "firmware_revision": "8.0.0", 00:14:47.948 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:47.948 "oacs": { 00:14:47.948 "security": 0, 00:14:47.948 "format": 1, 00:14:47.948 "firmware": 0, 00:14:47.948 "ns_manage": 1 00:14:47.948 }, 00:14:47.948 "multi_ctrlr": false, 00:14:47.948 "ana_reporting": false 00:14:47.948 }, 00:14:47.948 "vs": { 00:14:47.948 "nvme_version": "1.4" 00:14:47.948 }, 00:14:47.948 "ns_data": { 00:14:47.948 "id": 1, 00:14:47.948 "can_share": false 00:14:47.948 } 00:14:47.948 } 00:14:47.948 ], 00:14:47.948 "mp_policy": "active_passive" 00:14:47.948 } 00:14:47.948 } 00:14:47.948 ]' 00:14:47.948 04:10:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:47.948 04:10:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:47.948 04:10:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:47.948 04:10:49 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:47.948 04:10:49 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:47.948 04:10:49 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:47.948 04:10:49 -- ftl/common.sh@63 -- # base_size=5120 00:14:47.948 04:10:49 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:47.948 04:10:49 -- ftl/common.sh@67 -- # clear_lvols 00:14:47.948 04:10:49 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:47.948 04:10:49 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:48.208 04:10:49 -- ftl/common.sh@28 -- # stores=2c4ca93e-b469-4aa7-8d7d-104027f3e774 00:14:48.208 04:10:49 -- ftl/common.sh@29 -- # for lvs in $stores 00:14:48.208 04:10:49 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c4ca93e-b469-4aa7-8d7d-104027f3e774 00:14:48.466 04:10:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:48.466 04:10:50 -- ftl/common.sh@68 -- # lvs=33be9b9a-ef14-4859-b5d7-afbf78f2f904 00:14:48.466 04:10:50 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 33be9b9a-ef14-4859-b5d7-afbf78f2f904 00:14:48.725 04:10:50 -- ftl/trim.sh@43 -- # split_bdev=86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:48.725 04:10:50 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:48.725 04:10:50 -- ftl/common.sh@35 -- # local name=nvc0 00:14:48.725 04:10:50 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:48.725 04:10:50 -- ftl/common.sh@37 -- # local base_bdev=86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:48.725 04:10:50 -- ftl/common.sh@38 -- # local cache_size= 00:14:48.725 04:10:50 -- ftl/common.sh@41 -- # get_bdev_size 86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:48.725 04:10:50 -- common/autotest_common.sh@1367 -- # local bdev_name=86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:48.725 04:10:50 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:14:48.725 04:10:50 -- common/autotest_common.sh@1369 -- # local bs 00:14:48.725 04:10:50 -- common/autotest_common.sh@1370 -- # local nb 00:14:48.725 04:10:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:48.984 04:10:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:48.984 { 00:14:48.984 "name": "86b3e0fd-18c2-43fe-a836-30d9fc02abf2", 00:14:48.984 "aliases": [ 00:14:48.984 "lvs/nvme0n1p0" 00:14:48.984 ], 00:14:48.984 "product_name": "Logical Volume", 00:14:48.984 "block_size": 4096, 00:14:48.984 "num_blocks": 26476544, 00:14:48.984 "uuid": "86b3e0fd-18c2-43fe-a836-30d9fc02abf2", 00:14:48.984 "assigned_rate_limits": { 00:14:48.984 "rw_ios_per_sec": 0, 00:14:48.984 "rw_mbytes_per_sec": 0, 00:14:48.984 "r_mbytes_per_sec": 0, 00:14:48.984 "w_mbytes_per_sec": 0 00:14:48.984 }, 00:14:48.984 "claimed": false, 00:14:48.984 "zoned": false, 00:14:48.984 "supported_io_types": { 00:14:48.984 "read": true, 00:14:48.984 "write": true, 00:14:48.984 "unmap": true, 00:14:48.984 "write_zeroes": true, 00:14:48.984 "flush": false, 00:14:48.984 "reset": true, 00:14:48.984 "compare": false, 00:14:48.984 "compare_and_write": false, 00:14:48.984 "abort": false, 00:14:48.984 "nvme_admin": false, 00:14:48.984 "nvme_io": false 00:14:48.984 }, 00:14:48.984 "driver_specific": { 00:14:48.984 "lvol": { 00:14:48.984 "lvol_store_uuid": "33be9b9a-ef14-4859-b5d7-afbf78f2f904", 00:14:48.984 "base_bdev": "nvme0n1", 00:14:48.984 "thin_provision": true, 00:14:48.984 "snapshot": false, 00:14:48.984 "clone": false, 00:14:48.984 "esnap_clone": false 00:14:48.984 } 00:14:48.984 } 00:14:48.984 } 00:14:48.984 ]' 00:14:48.984 04:10:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:48.984 04:10:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:48.985 04:10:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:48.985 04:10:50 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:48.985 04:10:50 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:48.985 04:10:50 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:48.985 04:10:50 -- ftl/common.sh@41 -- # local base_size=5171 00:14:48.985 04:10:50 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:48.985 04:10:50 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:49.244 04:10:50 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:49.244 04:10:50 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:49.244 04:10:50 -- ftl/common.sh@48 -- # get_bdev_size 86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:49.244 04:10:50 -- common/autotest_common.sh@1367 -- # local bdev_name=86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:49.244 04:10:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:49.244 04:10:50 -- common/autotest_common.sh@1369 -- # local bs 00:14:49.244 04:10:50 -- common/autotest_common.sh@1370 -- # local nb 00:14:49.244 04:10:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:49.503 04:10:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:49.503 { 00:14:49.503 "name": "86b3e0fd-18c2-43fe-a836-30d9fc02abf2", 00:14:49.503 "aliases": [ 00:14:49.503 "lvs/nvme0n1p0" 00:14:49.503 ], 00:14:49.503 "product_name": "Logical Volume", 00:14:49.503 "block_size": 4096, 00:14:49.503 "num_blocks": 26476544, 
00:14:49.503 "uuid": "86b3e0fd-18c2-43fe-a836-30d9fc02abf2", 00:14:49.503 "assigned_rate_limits": { 00:14:49.503 "rw_ios_per_sec": 0, 00:14:49.503 "rw_mbytes_per_sec": 0, 00:14:49.503 "r_mbytes_per_sec": 0, 00:14:49.503 "w_mbytes_per_sec": 0 00:14:49.503 }, 00:14:49.503 "claimed": false, 00:14:49.503 "zoned": false, 00:14:49.503 "supported_io_types": { 00:14:49.503 "read": true, 00:14:49.503 "write": true, 00:14:49.503 "unmap": true, 00:14:49.503 "write_zeroes": true, 00:14:49.503 "flush": false, 00:14:49.503 "reset": true, 00:14:49.503 "compare": false, 00:14:49.503 "compare_and_write": false, 00:14:49.503 "abort": false, 00:14:49.503 "nvme_admin": false, 00:14:49.503 "nvme_io": false 00:14:49.503 }, 00:14:49.503 "driver_specific": { 00:14:49.503 "lvol": { 00:14:49.503 "lvol_store_uuid": "33be9b9a-ef14-4859-b5d7-afbf78f2f904", 00:14:49.503 "base_bdev": "nvme0n1", 00:14:49.503 "thin_provision": true, 00:14:49.503 "snapshot": false, 00:14:49.503 "clone": false, 00:14:49.503 "esnap_clone": false 00:14:49.503 } 00:14:49.503 } 00:14:49.503 } 00:14:49.503 ]' 00:14:49.503 04:10:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:49.503 04:10:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:49.503 04:10:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:49.503 04:10:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:49.503 04:10:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:49.503 04:10:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:49.503 04:10:51 -- ftl/common.sh@48 -- # cache_size=5171 00:14:49.503 04:10:51 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:49.763 04:10:51 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:14:49.763 04:10:51 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:14:49.763 04:10:51 -- ftl/trim.sh@47 -- # get_bdev_size 86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:49.763 04:10:51 -- common/autotest_common.sh@1367 -- # local bdev_name=86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:49.763 04:10:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:49.763 04:10:51 -- common/autotest_common.sh@1369 -- # local bs 00:14:49.763 04:10:51 -- common/autotest_common.sh@1370 -- # local nb 00:14:49.763 04:10:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 86b3e0fd-18c2-43fe-a836-30d9fc02abf2 00:14:49.763 04:10:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:49.763 { 00:14:49.763 "name": "86b3e0fd-18c2-43fe-a836-30d9fc02abf2", 00:14:49.763 "aliases": [ 00:14:49.763 "lvs/nvme0n1p0" 00:14:49.763 ], 00:14:49.763 "product_name": "Logical Volume", 00:14:49.763 "block_size": 4096, 00:14:49.763 "num_blocks": 26476544, 00:14:49.763 "uuid": "86b3e0fd-18c2-43fe-a836-30d9fc02abf2", 00:14:49.763 "assigned_rate_limits": { 00:14:49.763 "rw_ios_per_sec": 0, 00:14:49.763 "rw_mbytes_per_sec": 0, 00:14:49.763 "r_mbytes_per_sec": 0, 00:14:49.763 "w_mbytes_per_sec": 0 00:14:49.763 }, 00:14:49.763 "claimed": false, 00:14:49.763 "zoned": false, 00:14:49.763 "supported_io_types": { 00:14:49.763 "read": true, 00:14:49.763 "write": true, 00:14:49.763 "unmap": true, 00:14:49.763 "write_zeroes": true, 00:14:49.763 "flush": false, 00:14:49.763 "reset": true, 00:14:49.763 "compare": false, 00:14:49.763 "compare_and_write": false, 00:14:49.763 "abort": false, 00:14:49.763 "nvme_admin": false, 00:14:49.763 "nvme_io": false 00:14:49.763 }, 00:14:49.763 "driver_specific": { 00:14:49.763 "lvol": { 00:14:49.763 
"lvol_store_uuid": "33be9b9a-ef14-4859-b5d7-afbf78f2f904", 00:14:49.763 "base_bdev": "nvme0n1", 00:14:49.763 "thin_provision": true, 00:14:49.763 "snapshot": false, 00:14:49.763 "clone": false, 00:14:49.763 "esnap_clone": false 00:14:49.763 } 00:14:49.763 } 00:14:49.763 } 00:14:49.763 ]' 00:14:49.763 04:10:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:50.024 04:10:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:50.024 04:10:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:50.024 04:10:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:50.024 04:10:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:50.024 04:10:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:50.024 04:10:51 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:14:50.024 04:10:51 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 86b3e0fd-18c2-43fe-a836-30d9fc02abf2 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:14:50.024 [2024-11-26 04:10:51.730647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.730695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:50.024 [2024-11-26 04:10:51.730708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:50.024 [2024-11-26 04:10:51.730715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.732587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.732627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:50.024 [2024-11-26 04:10:51.732638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:14:50.024 [2024-11-26 04:10:51.732645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.732715] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:50.024 [2024-11-26 04:10:51.732888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:50.024 [2024-11-26 04:10:51.732904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.732910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:50.024 [2024-11-26 04:10:51.732920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:14:50.024 [2024-11-26 04:10:51.732926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.733005] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0a761f2a-7b57-4683-924b-3bfae6ab70d6 00:14:50.024 [2024-11-26 04:10:51.733972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.733999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:50.024 [2024-11-26 04:10:51.734015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:14:50.024 [2024-11-26 04:10:51.734024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.738753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.738780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:50.024 
[2024-11-26 04:10:51.738789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.667 ms 00:14:50.024 [2024-11-26 04:10:51.738800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.738911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.738927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:50.024 [2024-11-26 04:10:51.738934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:14:50.024 [2024-11-26 04:10:51.738949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.738976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.738983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:50.024 [2024-11-26 04:10:51.738990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:50.024 [2024-11-26 04:10:51.738996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.739026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:14:50.024 [2024-11-26 04:10:51.740266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.740379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:50.024 [2024-11-26 04:10:51.740394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:14:50.024 [2024-11-26 04:10:51.740400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.740440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.740446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:50.024 [2024-11-26 04:10:51.740455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:14:50.024 [2024-11-26 04:10:51.740461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.740490] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:50.024 [2024-11-26 04:10:51.740592] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:50.024 [2024-11-26 04:10:51.740610] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:50.024 [2024-11-26 04:10:51.740618] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:50.024 [2024-11-26 04:10:51.740627] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:50.024 [2024-11-26 04:10:51.740634] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:50.024 [2024-11-26 04:10:51.740645] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:14:50.024 [2024-11-26 04:10:51.740650] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:50.024 [2024-11-26 04:10:51.740657] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:50.024 [2024-11-26 04:10:51.740663] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:50.024 [2024-11-26 
04:10:51.740669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.740675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:50.024 [2024-11-26 04:10:51.740682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:14:50.024 [2024-11-26 04:10:51.740687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.740754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.024 [2024-11-26 04:10:51.740760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:50.024 [2024-11-26 04:10:51.740769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:14:50.024 [2024-11-26 04:10:51.740781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.024 [2024-11-26 04:10:51.740861] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:50.024 [2024-11-26 04:10:51.740875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:50.024 [2024-11-26 04:10:51.740884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:50.024 [2024-11-26 04:10:51.740889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:50.024 [2024-11-26 04:10:51.740897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:50.024 [2024-11-26 04:10:51.740902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:50.024 [2024-11-26 04:10:51.740908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:14:50.024 [2024-11-26 04:10:51.740913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:50.024 [2024-11-26 04:10:51.740919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:14:50.024 [2024-11-26 04:10:51.740924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:50.024 [2024-11-26 04:10:51.740930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:50.024 [2024-11-26 04:10:51.740935] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:14:50.024 [2024-11-26 04:10:51.740943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:50.024 [2024-11-26 04:10:51.740948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:50.024 [2024-11-26 04:10:51.740954] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:14:50.024 [2024-11-26 04:10:51.740960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:50.024 [2024-11-26 04:10:51.740967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:50.024 [2024-11-26 04:10:51.740972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:14:50.024 [2024-11-26 04:10:51.740979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:50.024 [2024-11-26 04:10:51.740985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:50.025 [2024-11-26 04:10:51.740992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:14:50.025 [2024-11-26 04:10:51.740999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:50.025 [2024-11-26 04:10:51.741007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:50.025 [2024-11-26 04:10:51.741012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:14:50.025 [2024-11-26 04:10:51.741019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:50.025 [2024-11-26 04:10:51.741025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:50.025 [2024-11-26 04:10:51.741032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:14:50.025 [2024-11-26 04:10:51.741037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:50.025 [2024-11-26 04:10:51.741046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:50.025 [2024-11-26 04:10:51.741052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:14:50.025 [2024-11-26 04:10:51.741059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:50.025 [2024-11-26 04:10:51.741065] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:50.025 [2024-11-26 04:10:51.741072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:14:50.025 [2024-11-26 04:10:51.741077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:50.025 [2024-11-26 04:10:51.741084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:50.025 [2024-11-26 04:10:51.741090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:14:50.025 [2024-11-26 04:10:51.741096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:50.025 [2024-11-26 04:10:51.741102] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:50.025 [2024-11-26 04:10:51.741109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:14:50.025 [2024-11-26 04:10:51.741114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:50.025 [2024-11-26 04:10:51.741121] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:50.025 [2024-11-26 04:10:51.741127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:50.025 [2024-11-26 04:10:51.741142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:50.025 [2024-11-26 04:10:51.741148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:50.025 [2024-11-26 04:10:51.741158] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:50.025 [2024-11-26 04:10:51.741164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:50.025 [2024-11-26 04:10:51.741171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:50.025 [2024-11-26 04:10:51.741177] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:50.025 [2024-11-26 04:10:51.741184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:50.025 [2024-11-26 04:10:51.741190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:50.025 [2024-11-26 04:10:51.741198] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:50.025 [2024-11-26 04:10:51.741206] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:50.025 [2024-11-26 04:10:51.741215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:14:50.025 [2024-11-26 04:10:51.741222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:14:50.025 [2024-11-26 04:10:51.741230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:14:50.025 [2024-11-26 04:10:51.741236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:14:50.025 [2024-11-26 04:10:51.741243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:14:50.025 [2024-11-26 04:10:51.741249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:14:50.025 [2024-11-26 04:10:51.741256] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:14:50.025 [2024-11-26 04:10:51.741263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:14:50.025 [2024-11-26 04:10:51.741271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:14:50.025 [2024-11-26 04:10:51.741277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:14:50.025 [2024-11-26 04:10:51.741285] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:14:50.025 [2024-11-26 04:10:51.741291] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:14:50.025 [2024-11-26 04:10:51.741298] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:14:50.025 [2024-11-26 04:10:51.741304] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:50.025 [2024-11-26 04:10:51.741313] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:50.025 [2024-11-26 04:10:51.741321] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:50.025 [2024-11-26 04:10:51.741328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:50.025 [2024-11-26 04:10:51.741334] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:50.025 [2024-11-26 04:10:51.741342] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:50.025 [2024-11-26 04:10:51.741348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.025 [2024-11-26 04:10:51.741355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:50.025 [2024-11-26 04:10:51.741360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:14:50.025 [2024-11-26 04:10:51.741367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.025 [2024-11-26 04:10:51.746641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:14:50.025 [2024-11-26 04:10:51.746743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:50.025 [2024-11-26 04:10:51.746755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.217 ms 00:14:50.025 [2024-11-26 04:10:51.746762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.025 [2024-11-26 04:10:51.746847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.025 [2024-11-26 04:10:51.746863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:50.025 [2024-11-26 04:10:51.746876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:14:50.025 [2024-11-26 04:10:51.746882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.025 [2024-11-26 04:10:51.754707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.025 [2024-11-26 04:10:51.754732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:50.025 [2024-11-26 04:10:51.754746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.801 ms 00:14:50.025 [2024-11-26 04:10:51.754759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.025 [2024-11-26 04:10:51.754807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.025 [2024-11-26 04:10:51.754823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:50.025 [2024-11-26 04:10:51.754830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:50.025 [2024-11-26 04:10:51.754837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.025 [2024-11-26 04:10:51.755107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.025 [2024-11-26 04:10:51.755135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:50.025 [2024-11-26 04:10:51.755142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:14:50.025 [2024-11-26 04:10:51.755149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.025 [2024-11-26 04:10:51.755244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.025 [2024-11-26 04:10:51.755253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:50.025 [2024-11-26 04:10:51.755259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:14:50.025 [2024-11-26 04:10:51.755282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.025 [2024-11-26 04:10:51.769472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.025 [2024-11-26 04:10:51.769560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:50.025 [2024-11-26 04:10:51.769600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.159 ms 00:14:50.025 [2024-11-26 04:10:51.769616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.025 [2024-11-26 04:10:51.782321] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:50.283 [2024-11-26 04:10:51.794238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.283 [2024-11-26 04:10:51.794267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:50.283 [2024-11-26 04:10:51.794278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.461 ms 00:14:50.283 
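A quick cross-check of the "Initialize L2P" sizing against the layout dump above: 23592960 L2P entries at the reported address size of 4 bytes come to exactly the 90.00 MiB "l2p" region, and the same entry count at the 4096-byte block size gives the 90 GiB of user capacity that ftl0 exposes. A minimal standalone sketch of that arithmetic (variable names are illustrative, not taken from the test scripts):

  entries=23592960   # L2P entries, from the ftl_layout_setup dump above
  addr_sz=4          # L2P address size in bytes, from the same dump
  blk_sz=4096        # ftl0 block size in bytes
  echo "l2p region:    $(( entries * addr_sz / 1024 / 1024 )) MiB"          # prints 90
  echo "user capacity: $(( entries * blk_sz / 1024 / 1024 / 1024 )) GiB"    # prints 90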
[2024-11-26 04:10:51.794286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.283 [2024-11-26 04:10:51.856371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:50.283 [2024-11-26 04:10:51.856421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:50.283 [2024-11-26 04:10:51.856433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.015 ms 00:14:50.283 [2024-11-26 04:10:51.856439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:50.283 [2024-11-26 04:10:51.856484] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:14:50.283 [2024-11-26 04:10:51.856494] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:52.812 [2024-11-26 04:10:54.117977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.118043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:52.812 [2024-11-26 04:10:54.118063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2261.454 ms 00:14:52.812 [2024-11-26 04:10:54.118071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.118279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.118290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:52.812 [2024-11-26 04:10:54.118312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:14:52.812 [2024-11-26 04:10:54.118320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.121484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.121534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:52.812 [2024-11-26 04:10:54.121550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:14:52.812 [2024-11-26 04:10:54.121560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.124039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.124069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:52.812 [2024-11-26 04:10:54.124080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:14:52.812 [2024-11-26 04:10:54.124087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.124271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.124281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:52.812 [2024-11-26 04:10:54.124292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:14:52.812 [2024-11-26 04:10:54.124298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.145521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.145563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:52.812 [2024-11-26 04:10:54.145577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.184 ms 00:14:52.812 [2024-11-26 04:10:54.145587] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.149351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.149385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:52.812 [2024-11-26 04:10:54.149400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.710 ms 00:14:52.812 [2024-11-26 04:10:54.149409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.153067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.153098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:52.812 [2024-11-26 04:10:54.153118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:14:52.812 [2024-11-26 04:10:54.153129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.156336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.156368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:52.812 [2024-11-26 04:10:54.156380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.158 ms 00:14:52.812 [2024-11-26 04:10:54.156388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.156438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.156448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:52.812 [2024-11-26 04:10:54.156459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:52.812 [2024-11-26 04:10:54.156466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.156607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.812 [2024-11-26 04:10:54.156616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:52.812 [2024-11-26 04:10:54.156633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:14:52.812 [2024-11-26 04:10:54.156640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.812 [2024-11-26 04:10:54.157422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:52.812 [2024-11-26 04:10:54.158389] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2426.470 ms, result 0 00:14:52.812 [2024-11-26 04:10:54.159142] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:52.812 { 00:14:52.812 "name": "ftl0", 00:14:52.812 "uuid": "0a761f2a-7b57-4683-924b-3bfae6ab70d6" 00:14:52.812 } 00:14:52.812 04:10:54 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:14:52.812 04:10:54 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:52.812 04:10:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:52.812 04:10:54 -- common/autotest_common.sh@899 -- # local i 00:14:52.812 04:10:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:52.812 04:10:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:52.812 04:10:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:52.812 04:10:54 -- common/autotest_common.sh@904 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:52.812 [ 00:14:52.812 { 00:14:52.812 "name": "ftl0", 00:14:52.813 "aliases": [ 00:14:52.813 "0a761f2a-7b57-4683-924b-3bfae6ab70d6" 00:14:52.813 ], 00:14:52.813 "product_name": "FTL disk", 00:14:52.813 "block_size": 4096, 00:14:52.813 "num_blocks": 23592960, 00:14:52.813 "uuid": "0a761f2a-7b57-4683-924b-3bfae6ab70d6", 00:14:52.813 "assigned_rate_limits": { 00:14:52.813 "rw_ios_per_sec": 0, 00:14:52.813 "rw_mbytes_per_sec": 0, 00:14:52.813 "r_mbytes_per_sec": 0, 00:14:52.813 "w_mbytes_per_sec": 0 00:14:52.813 }, 00:14:52.813 "claimed": false, 00:14:52.813 "zoned": false, 00:14:52.813 "supported_io_types": { 00:14:52.813 "read": true, 00:14:52.813 "write": true, 00:14:52.813 "unmap": true, 00:14:52.813 "write_zeroes": true, 00:14:52.813 "flush": true, 00:14:52.813 "reset": false, 00:14:52.813 "compare": false, 00:14:52.813 "compare_and_write": false, 00:14:52.813 "abort": false, 00:14:52.813 "nvme_admin": false, 00:14:52.813 "nvme_io": false 00:14:52.813 }, 00:14:52.813 "driver_specific": { 00:14:52.813 "ftl": { 00:14:52.813 "base_bdev": "86b3e0fd-18c2-43fe-a836-30d9fc02abf2", 00:14:52.813 "cache": "nvc0n1p0" 00:14:52.813 } 00:14:52.813 } 00:14:52.813 } 00:14:52.813 ] 00:14:52.813 04:10:54 -- common/autotest_common.sh@905 -- # return 0 00:14:52.813 04:10:54 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:14:52.813 04:10:54 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:53.071 04:10:54 -- ftl/trim.sh@56 -- # echo ']}' 00:14:53.071 04:10:54 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:14:53.330 04:10:54 -- ftl/trim.sh@59 -- # bdev_info='[ 00:14:53.330 { 00:14:53.330 "name": "ftl0", 00:14:53.330 "aliases": [ 00:14:53.330 "0a761f2a-7b57-4683-924b-3bfae6ab70d6" 00:14:53.330 ], 00:14:53.330 "product_name": "FTL disk", 00:14:53.330 "block_size": 4096, 00:14:53.330 "num_blocks": 23592960, 00:14:53.330 "uuid": "0a761f2a-7b57-4683-924b-3bfae6ab70d6", 00:14:53.330 "assigned_rate_limits": { 00:14:53.330 "rw_ios_per_sec": 0, 00:14:53.330 "rw_mbytes_per_sec": 0, 00:14:53.330 "r_mbytes_per_sec": 0, 00:14:53.330 "w_mbytes_per_sec": 0 00:14:53.330 }, 00:14:53.330 "claimed": false, 00:14:53.330 "zoned": false, 00:14:53.330 "supported_io_types": { 00:14:53.330 "read": true, 00:14:53.330 "write": true, 00:14:53.330 "unmap": true, 00:14:53.330 "write_zeroes": true, 00:14:53.330 "flush": true, 00:14:53.330 "reset": false, 00:14:53.330 "compare": false, 00:14:53.330 "compare_and_write": false, 00:14:53.330 "abort": false, 00:14:53.330 "nvme_admin": false, 00:14:53.330 "nvme_io": false 00:14:53.330 }, 00:14:53.330 "driver_specific": { 00:14:53.330 "ftl": { 00:14:53.330 "base_bdev": "86b3e0fd-18c2-43fe-a836-30d9fc02abf2", 00:14:53.330 "cache": "nvc0n1p0" 00:14:53.330 } 00:14:53.330 } 00:14:53.330 } 00:14:53.330 ]' 00:14:53.330 04:10:54 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:14:53.330 04:10:54 -- ftl/trim.sh@60 -- # nb=23592960 00:14:53.330 04:10:54 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:53.589 [2024-11-26 04:10:55.133318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.589 [2024-11-26 04:10:55.133370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:53.589 [2024-11-26 04:10:55.133385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:53.589 [2024-11-26 04:10:55.133394] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.589 [2024-11-26 04:10:55.133431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:14:53.590 [2024-11-26 04:10:55.133904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.133927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:53.590 [2024-11-26 04:10:55.133941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:14:53.590 [2024-11-26 04:10:55.133948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.134448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.134458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:53.590 [2024-11-26 04:10:55.134471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:14:53.590 [2024-11-26 04:10:55.134479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.138146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.138168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:53.590 [2024-11-26 04:10:55.138179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:14:53.590 [2024-11-26 04:10:55.138187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.145073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.145201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:53.590 [2024-11-26 04:10:55.145220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.845 ms 00:14:53.590 [2024-11-26 04:10:55.145228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.146974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.147006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:53.590 [2024-11-26 04:10:55.147017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:14:53.590 [2024-11-26 04:10:55.147024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.150925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.150955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:53.590 [2024-11-26 04:10:55.150968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.848 ms 00:14:53.590 [2024-11-26 04:10:55.150976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.151168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.151177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:53.590 [2024-11-26 04:10:55.151186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:14:53.590 [2024-11-26 04:10:55.151204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.152991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.153022] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:53.590 [2024-11-26 04:10:55.153032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:14:53.590 [2024-11-26 04:10:55.153039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.154389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.154418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:53.590 [2024-11-26 04:10:55.154429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:14:53.590 [2024-11-26 04:10:55.154435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.155433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.155463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:53.590 [2024-11-26 04:10:55.155475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:14:53.590 [2024-11-26 04:10:55.155482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.156383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.590 [2024-11-26 04:10:55.156412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:53.590 [2024-11-26 04:10:55.156423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:14:53.590 [2024-11-26 04:10:55.156429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.590 [2024-11-26 04:10:55.156469] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:53.590 [2024-11-26 04:10:55.156482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:53.590 [2024-11-26 04:10:55.156781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156826] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.156995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157027] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 
04:10:55.157229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:53.591 [2024-11-26 04:10:55.157343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:53.591 [2024-11-26 04:10:55.157352] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a761f2a-7b57-4683-924b-3bfae6ab70d6 00:14:53.591 [2024-11-26 04:10:55.157360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:53.592 [2024-11-26 04:10:55.157368] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:53.592 [2024-11-26 04:10:55.157375] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:53.592 [2024-11-26 04:10:55.157386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:53.592 [2024-11-26 04:10:55.157393] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:53.592 [2024-11-26 04:10:55.157402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:53.592 [2024-11-26 04:10:55.157411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:53.592 [2024-11-26 04:10:55.157418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:53.592 [2024-11-26 04:10:55.157425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:53.592 [2024-11-26 04:10:55.157433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.592 [2024-11-26 04:10:55.157440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:53.592 [2024-11-26 04:10:55.157450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:14:53.592 [2024-11-26 04:10:55.157456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:14:53.592 [2024-11-26 04:10:55.158968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.592 [2024-11-26 04:10:55.159075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:53.592 [2024-11-26 04:10:55.159093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:14:53.592 [2024-11-26 04:10:55.159100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.159165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:53.592 [2024-11-26 04:10:55.159173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:53.592 [2024-11-26 04:10:55.159184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:53.592 [2024-11-26 04:10:55.159202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.164428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.164462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:53.592 [2024-11-26 04:10:55.164474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.164481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.164577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.164587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:53.592 [2024-11-26 04:10:55.164596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.164603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.164662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.164671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:53.592 [2024-11-26 04:10:55.164680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.164687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.164721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.164729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:53.592 [2024-11-26 04:10:55.164739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.164746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.174291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.174498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:53.592 [2024-11-26 04:10:55.174529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.174537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.178259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.178290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:53.592 [2024-11-26 04:10:55.178302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 
04:10:55.178309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.178347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.178355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:53.592 [2024-11-26 04:10:55.178364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.178371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.178429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.178438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:53.592 [2024-11-26 04:10:55.178449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.178456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.178546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.178555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:53.592 [2024-11-26 04:10:55.178564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.178572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.178637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.178649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:53.592 [2024-11-26 04:10:55.178660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.178667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.178713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.178732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:53.592 [2024-11-26 04:10:55.178742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.178749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.178799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:53.592 [2024-11-26 04:10:55.178810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:53.592 [2024-11-26 04:10:55.178821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:53.592 [2024-11-26 04:10:55.178828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:53.592 [2024-11-26 04:10:55.178990] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.658 ms, result 0 00:14:53.592 true 00:14:53.592 04:10:55 -- ftl/trim.sh@63 -- # killprocess 83053 00:14:53.592 04:10:55 -- common/autotest_common.sh@936 -- # '[' -z 83053 ']' 00:14:53.592 04:10:55 -- common/autotest_common.sh@940 -- # kill -0 83053 00:14:53.592 04:10:55 -- common/autotest_common.sh@941 -- # uname 00:14:53.592 04:10:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.592 04:10:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83053 00:14:53.592 killing process with pid 83053 00:14:53.592 04:10:55 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:14:53.592 04:10:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:53.592 04:10:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83053' 00:14:53.592 04:10:55 -- common/autotest_common.sh@955 -- # kill 83053 00:14:53.592 04:10:55 -- common/autotest_common.sh@960 -- # wait 83053 00:14:58.855 04:10:59 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:14:59.425 65536+0 records in 00:14:59.425 65536+0 records out 00:14:59.425 268435456 bytes (268 MB, 256 MiB) copied, 1.06926 s, 251 MB/s 00:14:59.425 04:11:01 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:59.425 [2024-11-26 04:11:01.065292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:59.425 [2024-11-26 04:11:01.065378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83247 ] 00:14:59.683 [2024-11-26 04:11:01.208744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.683 [2024-11-26 04:11:01.239895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.683 [2024-11-26 04:11:01.324338] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:14:59.683 [2024-11-26 04:11:01.324406] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:14:59.946 [2024-11-26 04:11:01.473445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.473487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:59.946 [2024-11-26 04:11:01.473518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:59.946 [2024-11-26 04:11:01.473526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.475686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.475719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:59.946 [2024-11-26 04:11:01.475734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:14:59.946 [2024-11-26 04:11:01.475741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.475808] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:59.946 [2024-11-26 04:11:01.476027] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:59.946 [2024-11-26 04:11:01.476039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.476047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:59.946 [2024-11-26 04:11:01.476055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:14:59.946 [2024-11-26 04:11:01.476062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.477157] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:14:59.946 [2024-11-26 04:11:01.479374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 
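The write phase being started here reduces to the two commands traced above from ftl/trim.sh@66 and @69; a minimal sketch under this log's paths (the of= destination for dd is assumed from the later --if argument, and $SPDK is shorthand introduced here):

  SPDK=/home/vagrant/spdk_repo/spdk
  # 65536 records x 4 KiB = 268435456 bytes (256 MiB), matching the dd summary above
  dd if=/dev/urandom of=$SPDK/test/ftl/random_pattern bs=4K count=65536
  # replay the pattern into the ftl0 bdev; spdk_dd takes its bdev config from the JSON file
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/random_pattern --ob=ftl0 \
      --json=$SPDK/test/ftl/config/ftl.json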
[2024-11-26 04:11:01.479409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:14:59.946 [2024-11-26 04:11:01.479419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:14:59.946 [2024-11-26 04:11:01.479426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.479483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.479492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:14:59.946 [2024-11-26 04:11:01.479522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:14:59.946 [2024-11-26 04:11:01.479534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.484294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.484413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:59.946 [2024-11-26 04:11:01.484434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:14:59.946 [2024-11-26 04:11:01.484442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.484557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.484568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:59.946 [2024-11-26 04:11:01.484578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:14:59.946 [2024-11-26 04:11:01.484585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.484609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.484617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:59.946 [2024-11-26 04:11:01.484628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:59.946 [2024-11-26 04:11:01.484637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.484661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:14:59.946 [2024-11-26 04:11:01.485949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.485976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:59.946 [2024-11-26 04:11:01.485985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:14:59.946 [2024-11-26 04:11:01.485991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.486026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.486036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:59.946 [2024-11-26 04:11:01.486043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:14:59.946 [2024-11-26 04:11:01.486050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.486067] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:14:59.946 [2024-11-26 04:11:01.486088] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:14:59.946 [2024-11-26 04:11:01.486121] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:14:59.946 [2024-11-26 04:11:01.486135] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:14:59.946 [2024-11-26 04:11:01.486209] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:59.946 [2024-11-26 04:11:01.486221] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:59.946 [2024-11-26 04:11:01.486230] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:59.946 [2024-11-26 04:11:01.486239] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:59.946 [2024-11-26 04:11:01.486247] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:59.946 [2024-11-26 04:11:01.486259] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:14:59.946 [2024-11-26 04:11:01.486265] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:59.946 [2024-11-26 04:11:01.486274] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:59.946 [2024-11-26 04:11:01.486281] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:59.946 [2024-11-26 04:11:01.486287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.486297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:59.946 [2024-11-26 04:11:01.486304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:14:59.946 [2024-11-26 04:11:01.486314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.486379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.946 [2024-11-26 04:11:01.486387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:59.946 [2024-11-26 04:11:01.486394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:14:59.946 [2024-11-26 04:11:01.486405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.946 [2024-11-26 04:11:01.486478] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:59.946 [2024-11-26 04:11:01.486487] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:59.946 [2024-11-26 04:11:01.486494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:59.946 [2024-11-26 04:11:01.486519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:59.946 [2024-11-26 04:11:01.486527] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:59.946 [2024-11-26 04:11:01.486534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:59.946 [2024-11-26 04:11:01.486540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:14:59.946 [2024-11-26 04:11:01.486548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:59.946 [2024-11-26 04:11:01.486555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:14:59.946 [2024-11-26 04:11:01.486562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:59.946 [2024-11-26 04:11:01.486568] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:59.946 [2024-11-26 04:11:01.486580] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:14:59.946 [2024-11-26 04:11:01.486586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:59.946 [2024-11-26 04:11:01.486593] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:59.946 [2024-11-26 04:11:01.486600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:14:59.946 [2024-11-26 04:11:01.486609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:59.946 [2024-11-26 04:11:01.486617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:59.946 [2024-11-26 04:11:01.486624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:14:59.946 [2024-11-26 04:11:01.486631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:59.946 [2024-11-26 04:11:01.486639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:59.946 [2024-11-26 04:11:01.486646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:14:59.946 [2024-11-26 04:11:01.486654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:59.946 [2024-11-26 04:11:01.486661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:59.947 [2024-11-26 04:11:01.486668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:14:59.947 [2024-11-26 04:11:01.486676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:59.947 [2024-11-26 04:11:01.486683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:59.947 [2024-11-26 04:11:01.486690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:14:59.947 [2024-11-26 04:11:01.486697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:59.947 [2024-11-26 04:11:01.486704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:59.947 [2024-11-26 04:11:01.486711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:14:59.947 [2024-11-26 04:11:01.486718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:59.947 [2024-11-26 04:11:01.486729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:59.947 [2024-11-26 04:11:01.486737] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:14:59.947 [2024-11-26 04:11:01.486744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:59.947 [2024-11-26 04:11:01.486751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:59.947 [2024-11-26 04:11:01.486759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:14:59.947 [2024-11-26 04:11:01.486766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:59.947 [2024-11-26 04:11:01.486772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:59.947 [2024-11-26 04:11:01.486778] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:14:59.947 [2024-11-26 04:11:01.486785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:59.947 [2024-11-26 04:11:01.486790] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:59.947 [2024-11-26 04:11:01.486798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:59.947 
[2024-11-26 04:11:01.486805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:59.947 [2024-11-26 04:11:01.486813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:59.947 [2024-11-26 04:11:01.486819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:59.947 [2024-11-26 04:11:01.486826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:59.947 [2024-11-26 04:11:01.486832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:59.947 [2024-11-26 04:11:01.486841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:59.947 [2024-11-26 04:11:01.486847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:59.947 [2024-11-26 04:11:01.486853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:59.947 [2024-11-26 04:11:01.486860] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:59.947 [2024-11-26 04:11:01.486868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:59.947 [2024-11-26 04:11:01.486878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:14:59.947 [2024-11-26 04:11:01.486885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:14:59.947 [2024-11-26 04:11:01.486892] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:14:59.947 [2024-11-26 04:11:01.486898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:14:59.947 [2024-11-26 04:11:01.486905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:14:59.947 [2024-11-26 04:11:01.486912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:14:59.947 [2024-11-26 04:11:01.486918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:14:59.947 [2024-11-26 04:11:01.486925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:14:59.947 [2024-11-26 04:11:01.486932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:14:59.947 [2024-11-26 04:11:01.486938] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:14:59.947 [2024-11-26 04:11:01.486945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:14:59.947 [2024-11-26 04:11:01.486953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:14:59.947 [2024-11-26 04:11:01.486961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:14:59.947 [2024-11-26 04:11:01.486967] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:59.947 [2024-11-26 04:11:01.486975] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:59.947 [2024-11-26 04:11:01.486983] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:59.947 [2024-11-26 04:11:01.486989] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:59.947 [2024-11-26 04:11:01.486996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:59.947 [2024-11-26 04:11:01.487004] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:59.947 [2024-11-26 04:11:01.487012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.487019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:59.947 [2024-11-26 04:11:01.487026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:14:59.947 [2024-11-26 04:11:01.487032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.493335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.493453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:59.947 [2024-11-26 04:11:01.493517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.258 ms 00:14:59.947 [2024-11-26 04:11:01.493545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.493679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.493706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:59.947 [2024-11-26 04:11:01.493765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:14:59.947 [2024-11-26 04:11:01.493786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.510238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.510390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:59.947 [2024-11-26 04:11:01.510466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.414 ms 00:14:59.947 [2024-11-26 04:11:01.510524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.510626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.510712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:59.947 [2024-11-26 04:11:01.510742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:59.947 [2024-11-26 04:11:01.510766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.511152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.511257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:59.947 [2024-11-26 04:11:01.511316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.312 ms 00:14:59.947 [2024-11-26 04:11:01.511349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.511527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.511564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:59.947 [2024-11-26 04:11:01.511622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:14:59.947 [2024-11-26 04:11:01.511694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.517573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.517688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:59.947 [2024-11-26 04:11:01.517745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.831 ms 00:14:59.947 [2024-11-26 04:11:01.517802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.520628] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:14:59.947 [2024-11-26 04:11:01.520790] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:14:59.947 [2024-11-26 04:11:01.520860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.520886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:14:59.947 [2024-11-26 04:11:01.520911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:14:59.947 [2024-11-26 04:11:01.520944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.535574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.535604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:14:59.947 [2024-11-26 04:11:01.535621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.558 ms 00:14:59.947 [2024-11-26 04:11:01.535628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.537491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.537534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:14:59.947 [2024-11-26 04:11:01.537543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:14:59.947 [2024-11-26 04:11:01.537550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.539184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.947 [2024-11-26 04:11:01.539291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:14:59.947 [2024-11-26 04:11:01.539304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.596 ms 00:14:59.947 [2024-11-26 04:11:01.539310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.947 [2024-11-26 04:11:01.539518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.539529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:59.948 [2024-11-26 04:11:01.539537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:14:59.948 [2024-11-26 04:11:01.539544] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.556766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.556803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:14:59.948 [2024-11-26 04:11:01.556813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.200 ms 00:14:59.948 [2024-11-26 04:11:01.556821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.564096] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:59.948 [2024-11-26 04:11:01.578305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.578441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:59.948 [2024-11-26 04:11:01.578457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.413 ms 00:14:59.948 [2024-11-26 04:11:01.578465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.578552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.578563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:14:59.948 [2024-11-26 04:11:01.578577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:59.948 [2024-11-26 04:11:01.578588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.578636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.578644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:59.948 [2024-11-26 04:11:01.578652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:14:59.948 [2024-11-26 04:11:01.578659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.579810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.579837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:59.948 [2024-11-26 04:11:01.579850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:14:59.948 [2024-11-26 04:11:01.579858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.579888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.579895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:59.948 [2024-11-26 04:11:01.579902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:59.948 [2024-11-26 04:11:01.579912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.579942] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:14:59.948 [2024-11-26 04:11:01.579953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.579960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:14:59.948 [2024-11-26 04:11:01.579968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:14:59.948 [2024-11-26 04:11:01.579975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.584197] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.584238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:59.948 [2024-11-26 04:11:01.584250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.196 ms 00:14:59.948 [2024-11-26 04:11:01.584257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.584330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:59.948 [2024-11-26 04:11:01.584340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:59.948 [2024-11-26 04:11:01.584348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:14:59.948 [2024-11-26 04:11:01.584355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:59.948 [2024-11-26 04:11:01.585197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:59.948 [2024-11-26 04:11:01.586198] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.502 ms, result 0 00:14:59.948 [2024-11-26 04:11:01.587235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:59.948 [2024-11-26 04:11:01.595549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:00.975  [2024-11-26T04:11:03.686Z] Copying: 22/256 [MB] (22 MBps) [2024-11-26T04:11:04.629Z] Copying: 44/256 [MB] (22 MBps) [2024-11-26T04:11:06.017Z] Copying: 67/256 [MB] (22 MBps) [2024-11-26T04:11:06.951Z] Copying: 89/256 [MB] (22 MBps) [2024-11-26T04:11:07.885Z] Copying: 126/256 [MB] (36 MBps) [2024-11-26T04:11:08.822Z] Copying: 176/256 [MB] (50 MBps) [2024-11-26T04:11:09.756Z] Copying: 218/256 [MB] (42 MBps) [2024-11-26T04:11:09.756Z] Copying: 256/256 [MB] (average 32 MBps)[2024-11-26 04:11:09.478961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:07.988 [2024-11-26 04:11:09.480049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.480079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:07.988 [2024-11-26 04:11:09.480098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:07.988 [2024-11-26 04:11:09.480109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.988 [2024-11-26 04:11:09.480128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:07.988 [2024-11-26 04:11:09.480536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.480551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:07.988 [2024-11-26 04:11:09.480564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:15:07.988 [2024-11-26 04:11:09.480572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.988 [2024-11-26 04:11:09.481975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.482106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:07.988 [2024-11-26 04:11:09.482122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:15:07.988 [2024-11-26 04:11:09.482130] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.988 [2024-11-26 04:11:09.488095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.488189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:07.988 [2024-11-26 04:11:09.488245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.945 ms 00:15:07.988 [2024-11-26 04:11:09.488267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.988 [2024-11-26 04:11:09.495169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.495267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:07.988 [2024-11-26 04:11:09.495328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.839 ms 00:15:07.988 [2024-11-26 04:11:09.495350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.988 [2024-11-26 04:11:09.497019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.497117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:07.988 [2024-11-26 04:11:09.497169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:15:07.988 [2024-11-26 04:11:09.497190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.988 [2024-11-26 04:11:09.500730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.500837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:07.988 [2024-11-26 04:11:09.500902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:15:07.988 [2024-11-26 04:11:09.500923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.988 [2024-11-26 04:11:09.501057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.501162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:07.988 [2024-11-26 04:11:09.501186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:15:07.988 [2024-11-26 04:11:09.501204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.988 [2024-11-26 04:11:09.503160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.988 [2024-11-26 04:11:09.503254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:07.989 [2024-11-26 04:11:09.503302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:15:07.989 [2024-11-26 04:11:09.503323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.989 [2024-11-26 04:11:09.504715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.989 [2024-11-26 04:11:09.504810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:07.989 [2024-11-26 04:11:09.504911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:15:07.989 [2024-11-26 04:11:09.504932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.989 [2024-11-26 04:11:09.505971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.989 [2024-11-26 04:11:09.506061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:07.989 [2024-11-26 04:11:09.506107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 
00:15:07.989 [2024-11-26 04:11:09.506128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.989 [2024-11-26 04:11:09.507083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.989 [2024-11-26 04:11:09.507171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:07.989 [2024-11-26 04:11:09.507220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:15:07.989 [2024-11-26 04:11:09.507240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.989 [2024-11-26 04:11:09.507277] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:07.989 [2024-11-26 04:11:09.507307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.507986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 
04:11:09.508214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.508989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:15:07.989 [2024-11-26 04:11:09.509385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:07.989 [2024-11-26 04:11:09.509794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.509824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.509852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.509930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.509959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.509987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:07.990 [2024-11-26 04:11:09.510456] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:07.990 [2024-11-26 04:11:09.510464] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a761f2a-7b57-4683-924b-3bfae6ab70d6 00:15:07.990 [2024-11-26 04:11:09.510472] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:07.990 [2024-11-26 04:11:09.510479] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:07.990 [2024-11-26 04:11:09.510486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:07.990 [2024-11-26 04:11:09.510493] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:07.990 [2024-11-26 04:11:09.510515] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:07.990 [2024-11-26 04:11:09.510530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:07.990 [2024-11-26 04:11:09.510537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:07.990 [2024-11-26 04:11:09.510543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:07.990 [2024-11-26 04:11:09.510549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:07.990 [2024-11-26 04:11:09.510556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.990 [2024-11-26 04:11:09.510567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:07.990 [2024-11-26 04:11:09.510575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:15:07.990 [2024-11-26 04:11:09.510589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.990 [2024-11-26 04:11:09.511905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.990 [2024-11-26 04:11:09.511925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:07.990 [2024-11-26 04:11:09.511933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:15:07.990 [2024-11-26 04:11:09.511940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.990 [2024-11-26 04:11:09.511991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:07.991 [2024-11-26 04:11:09.512003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:07.991 [2024-11-26 04:11:09.512011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:07.991 [2024-11-26 04:11:09.512018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.516813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.516915] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:07.991 [2024-11-26 04:11:09.516929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.516937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.516999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.517012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:07.991 [2024-11-26 04:11:09.517019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.517027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.517062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.517070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:07.991 [2024-11-26 04:11:09.517077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.517084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.517100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.517108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:07.991 [2024-11-26 04:11:09.517118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.517125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.525251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.525287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:07.991 [2024-11-26 04:11:09.525297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.525304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.528814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.528842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:07.991 [2024-11-26 04:11:09.528857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.528864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.528887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.528894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:07.991 [2024-11-26 04:11:09.528902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.528909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.528936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.528944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:07.991 [2024-11-26 04:11:09.528951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.528958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.529018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.529027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:07.991 [2024-11-26 04:11:09.529041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.529048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.529079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.529087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:07.991 [2024-11-26 04:11:09.529094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.529101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.529138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.529146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:07.991 [2024-11-26 04:11:09.529153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.529160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.529199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:07.991 [2024-11-26 04:11:09.529208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:07.991 [2024-11-26 04:11:09.529215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:07.991 [2024-11-26 04:11:09.529222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:07.991 [2024-11-26 04:11:09.529353] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.278 ms, result 0 00:15:08.250 00:15:08.250 00:15:08.250 04:11:09 -- ftl/trim.sh@72 -- # svcpid=83344 00:15:08.250 04:11:09 -- ftl/trim.sh@73 -- # waitforlisten 83344 00:15:08.250 04:11:09 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:15:08.250 04:11:09 -- common/autotest_common.sh@829 -- # '[' -z 83344 ']' 00:15:08.250 04:11:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.250 04:11:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.250 04:11:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.250 04:11:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.250 04:11:09 -- common/autotest_common.sh@10 -- # set +x 00:15:08.508 [2024-11-26 04:11:10.026022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:08.508 [2024-11-26 04:11:10.026303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83344 ] 00:15:08.508 [2024-11-26 04:11:10.173119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.508 [2024-11-26 04:11:10.202842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.508 [2024-11-26 04:11:10.203201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.447 04:11:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.447 04:11:10 -- common/autotest_common.sh@862 -- # return 0 00:15:09.447 04:11:10 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:15:09.447 [2024-11-26 04:11:11.035557] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:09.447 [2024-11-26 04:11:11.035617] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:09.447 [2024-11-26 04:11:11.199708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.447 [2024-11-26 04:11:11.199751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:09.447 [2024-11-26 04:11:11.199769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:09.447 [2024-11-26 04:11:11.199777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.447 [2024-11-26 04:11:11.201953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.447 [2024-11-26 04:11:11.201987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:09.447 [2024-11-26 04:11:11.201998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:15:09.447 [2024-11-26 04:11:11.202006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.447 [2024-11-26 04:11:11.202074] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:09.447 [2024-11-26 04:11:11.202298] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:09.447 [2024-11-26 04:11:11.202313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.447 [2024-11-26 04:11:11.202321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:09.447 [2024-11-26 04:11:11.202330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:15:09.447 [2024-11-26 04:11:11.202337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.447 [2024-11-26 04:11:11.203498] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:09.447 [2024-11-26 04:11:11.206284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.447 [2024-11-26 04:11:11.206317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:09.447 [2024-11-26 04:11:11.206327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:15:09.447 [2024-11-26 04:11:11.206335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.447 [2024-11-26 04:11:11.206387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.447 [2024-11-26 04:11:11.206400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:15:09.447 [2024-11-26 04:11:11.206411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:09.447 [2024-11-26 04:11:11.206419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.710 [2024-11-26 04:11:11.211060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.710 [2024-11-26 04:11:11.211090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:09.710 [2024-11-26 04:11:11.211100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.594 ms 00:15:09.710 [2024-11-26 04:11:11.211118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.710 [2024-11-26 04:11:11.211212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.710 [2024-11-26 04:11:11.211225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:09.710 [2024-11-26 04:11:11.211234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:15:09.710 [2024-11-26 04:11:11.211243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.710 [2024-11-26 04:11:11.211271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.710 [2024-11-26 04:11:11.211280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:09.710 [2024-11-26 04:11:11.211287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:09.710 [2024-11-26 04:11:11.211295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.710 [2024-11-26 04:11:11.211323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:09.710 [2024-11-26 04:11:11.212636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.710 [2024-11-26 04:11:11.212755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:09.710 [2024-11-26 04:11:11.212779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:15:09.710 [2024-11-26 04:11:11.212787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.710 [2024-11-26 04:11:11.212825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.710 [2024-11-26 04:11:11.212832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:09.710 [2024-11-26 04:11:11.212842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:09.710 [2024-11-26 04:11:11.212849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.710 [2024-11-26 04:11:11.212871] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:09.710 [2024-11-26 04:11:11.212892] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:09.710 [2024-11-26 04:11:11.212930] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:09.710 [2024-11-26 04:11:11.212944] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:09.710 [2024-11-26 04:11:11.213018] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:09.710 [2024-11-26 04:11:11.213028] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:15:09.710 [2024-11-26 04:11:11.213039] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:09.710 [2024-11-26 04:11:11.213048] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:09.710 [2024-11-26 04:11:11.213060] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:09.710 [2024-11-26 04:11:11.213068] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:09.710 [2024-11-26 04:11:11.213078] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:09.710 [2024-11-26 04:11:11.213086] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:09.710 [2024-11-26 04:11:11.213094] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:09.710 [2024-11-26 04:11:11.213101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.710 [2024-11-26 04:11:11.213110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:09.710 [2024-11-26 04:11:11.213117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:15:09.710 [2024-11-26 04:11:11.213126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.710 [2024-11-26 04:11:11.213188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.710 [2024-11-26 04:11:11.213197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:09.710 [2024-11-26 04:11:11.213204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:09.710 [2024-11-26 04:11:11.213217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.710 [2024-11-26 04:11:11.213291] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:09.710 [2024-11-26 04:11:11.213301] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:09.710 [2024-11-26 04:11:11.213309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:09.710 [2024-11-26 04:11:11.213321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.710 [2024-11-26 04:11:11.213329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:09.710 [2024-11-26 04:11:11.213338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:09.710 [2024-11-26 04:11:11.213346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:09.710 [2024-11-26 04:11:11.213355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:09.710 [2024-11-26 04:11:11.213363] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:09.710 [2024-11-26 04:11:11.213371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:09.710 [2024-11-26 04:11:11.213379] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:09.710 [2024-11-26 04:11:11.213388] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:09.710 [2024-11-26 04:11:11.213395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:09.710 [2024-11-26 04:11:11.213404] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:09.710 [2024-11-26 04:11:11.213411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:09.710 [2024-11-26 04:11:11.213421] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.710 [2024-11-26 04:11:11.213428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:09.710 [2024-11-26 04:11:11.213438] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:09.711 [2024-11-26 04:11:11.213445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.711 [2024-11-26 04:11:11.213455] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:09.711 [2024-11-26 04:11:11.213463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:09.711 [2024-11-26 04:11:11.213472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:09.711 [2024-11-26 04:11:11.213484] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:09.711 [2024-11-26 04:11:11.213493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:09.711 [2024-11-26 04:11:11.213517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:09.711 [2024-11-26 04:11:11.213527] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:09.711 [2024-11-26 04:11:11.213534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:09.711 [2024-11-26 04:11:11.213545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:09.711 [2024-11-26 04:11:11.213552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:09.711 [2024-11-26 04:11:11.213561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:09.711 [2024-11-26 04:11:11.213568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:09.711 [2024-11-26 04:11:11.213576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:09.711 [2024-11-26 04:11:11.213584] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:09.711 [2024-11-26 04:11:11.213593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:09.711 [2024-11-26 04:11:11.213600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:09.711 [2024-11-26 04:11:11.213610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:09.711 [2024-11-26 04:11:11.213617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:09.711 [2024-11-26 04:11:11.213626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:09.711 [2024-11-26 04:11:11.213633] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:09.711 [2024-11-26 04:11:11.213642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:09.711 [2024-11-26 04:11:11.213649] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:09.711 [2024-11-26 04:11:11.213663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:09.711 [2024-11-26 04:11:11.213671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:09.711 [2024-11-26 04:11:11.213681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.711 [2024-11-26 04:11:11.213691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:09.711 [2024-11-26 04:11:11.213700] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:09.711 [2024-11-26 04:11:11.213708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:15:09.711 [2024-11-26 04:11:11.213717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:09.711 [2024-11-26 04:11:11.213725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:09.711 [2024-11-26 04:11:11.213734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:09.711 [2024-11-26 04:11:11.213742] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:09.711 [2024-11-26 04:11:11.213759] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:09.711 [2024-11-26 04:11:11.213768] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:09.711 [2024-11-26 04:11:11.213778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:09.711 [2024-11-26 04:11:11.213786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:09.711 [2024-11-26 04:11:11.213796] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:09.711 [2024-11-26 04:11:11.213804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:09.711 [2024-11-26 04:11:11.213814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:09.711 [2024-11-26 04:11:11.213822] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:09.711 [2024-11-26 04:11:11.213831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:09.711 [2024-11-26 04:11:11.213839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:09.711 [2024-11-26 04:11:11.213848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:09.711 [2024-11-26 04:11:11.213855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:09.711 [2024-11-26 04:11:11.213864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:09.711 [2024-11-26 04:11:11.213871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:09.711 [2024-11-26 04:11:11.213879] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:09.711 [2024-11-26 04:11:11.213887] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:09.711 [2024-11-26 04:11:11.213898] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:09.711 [2024-11-26 04:11:11.213905] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:09.711 [2024-11-26 04:11:11.213913] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:09.711 [2024-11-26 04:11:11.213920] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:09.711 [2024-11-26 04:11:11.213929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.213935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:09.711 [2024-11-26 04:11:11.213944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:15:09.711 [2024-11-26 04:11:11.213950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.219771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.219877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:09.711 [2024-11-26 04:11:11.219897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.772 ms 00:15:09.711 [2024-11-26 04:11:11.219905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.220014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.220023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:09.711 [2024-11-26 04:11:11.220032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:09.711 [2024-11-26 04:11:11.220039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.228922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.228953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:09.711 [2024-11-26 04:11:11.228964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.859 ms 00:15:09.711 [2024-11-26 04:11:11.228971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.229018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.229026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:09.711 [2024-11-26 04:11:11.229036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:09.711 [2024-11-26 04:11:11.229046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.229346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.229365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:09.711 [2024-11-26 04:11:11.229377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:15:09.711 [2024-11-26 04:11:11.229384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.229497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.229521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:09.711 [2024-11-26 04:11:11.229531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:15:09.711 [2024-11-26 04:11:11.229538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.234731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.234834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:09.711 [2024-11-26 04:11:11.234851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.163 ms 00:15:09.711 [2024-11-26 04:11:11.234860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.237553] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:09.711 [2024-11-26 04:11:11.237581] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:09.711 [2024-11-26 04:11:11.237593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.237605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:09.711 [2024-11-26 04:11:11.237614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:15:09.711 [2024-11-26 04:11:11.237621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.251935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.251977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:09.711 [2024-11-26 04:11:11.251995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.272 ms 00:15:09.711 [2024-11-26 04:11:11.252008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.253748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.711 [2024-11-26 04:11:11.253775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:09.711 [2024-11-26 04:11:11.253787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:15:09.711 [2024-11-26 04:11:11.253794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.711 [2024-11-26 04:11:11.255805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.255832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:09.712 [2024-11-26 04:11:11.255842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:15:09.712 [2024-11-26 04:11:11.255848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.256062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.256071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:09.712 [2024-11-26 04:11:11.256080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:15:09.712 [2024-11-26 04:11:11.256087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.273965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.274012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:09.712 [2024-11-26 04:11:11.274026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.840 ms 00:15:09.712 [2024-11-26 04:11:11.274035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.281474] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:09.712 [2024-11-26 04:11:11.295050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.295088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:09.712 [2024-11-26 04:11:11.295099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.939 ms 00:15:09.712 [2024-11-26 04:11:11.295108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.295172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.295182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:09.712 [2024-11-26 04:11:11.295190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:09.712 [2024-11-26 04:11:11.295199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.295244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.295254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:09.712 [2024-11-26 04:11:11.295262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:09.712 [2024-11-26 04:11:11.295270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.296419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.296450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:09.712 [2024-11-26 04:11:11.296460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:15:09.712 [2024-11-26 04:11:11.296469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.296497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.296523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:09.712 [2024-11-26 04:11:11.296531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:09.712 [2024-11-26 04:11:11.296540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.296572] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:09.712 [2024-11-26 04:11:11.296585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.296592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:09.712 [2024-11-26 04:11:11.296606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:09.712 [2024-11-26 04:11:11.296613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.300285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.300317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:09.712 [2024-11-26 04:11:11.300327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.643 ms 00:15:09.712 [2024-11-26 04:11:11.300338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.712 [2024-11-26 04:11:11.300408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.712 [2024-11-26 04:11:11.300417] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization
00:15:09.712 [2024-11-26 04:11:11.300428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:15:09.712 [2024-11-26 04:11:11.300435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.712 [2024-11-26 04:11:11.301329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:15:09.712 [2024-11-26 04:11:11.302658] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.370 ms, result 0
00:15:09.712 [2024-11-26 04:11:11.304574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:15:09.712 Some configs were skipped because the RPC state that can call them passed over.
00:15:09.712 04:11:11 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:15:09.972 [2024-11-26 04:11:11.516458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.972 [2024-11-26 04:11:11.516633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:15:09.972 [2024-11-26 04:11:11.516689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.914 ms
00:15:09.972 [2024-11-26 04:11:11.516714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.972 [2024-11-26 04:11:11.516764] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 5.221 ms, result 0
00:15:09.972 true
00:15:09.972 04:11:11 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:15:09.972 [2024-11-26 04:11:11.716159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.972 [2024-11-26 04:11:11.716310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:15:09.972 [2024-11-26 04:11:11.716364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.401 ms
00:15:09.972 [2024-11-26 04:11:11.716387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.972 [2024-11-26 04:11:11.716438] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.678 ms, result 0
00:15:09.972 true
00:15:10.235 04:11:11 -- ftl/trim.sh@81 -- # killprocess 83344
00:15:10.235 04:11:11 -- common/autotest_common.sh@936 -- # '[' -z 83344 ']'
00:15:10.235 04:11:11 -- common/autotest_common.sh@940 -- # kill -0 83344
00:15:10.235 04:11:11 -- common/autotest_common.sh@941 -- # uname
00:15:10.235 04:11:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:10.235 04:11:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83344
00:15:10.235 killing process with pid 83344
00:15:10.235 04:11:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:10.235 04:11:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:10.235 04:11:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83344'
00:15:10.235 04:11:11 -- common/autotest_common.sh@955 -- # kill 83344
00:15:10.235 04:11:11 -- common/autotest_common.sh@960 -- # wait 83344
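The two bdev_ftl_unmap calls above trim the first and the last 1024-block windows of ftl0's logical space: startup reported 23592960 L2P entries, and 23592960 - 1024 = 23591936 is exactly the --lba passed to the second call. The killprocess lines above are the autotest_common.sh helper tearing down the SPDK target app (pid 83344); the 'FTL shutdown' trace that follows is that app closing ftl0 on exit. A minimal sketch of the helper, reconstructed from the xtrace lines above (the real definition lives in autotest_common.sh; the sudo branch and error handling are simplified assumptions here):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # @936: require a pid argument
    kill -0 "$pid" || return 1              # @940: bail out if it is already gone
    if [ "$(uname)" = Linux ]; then         # @941: platform-specific probe
        process_name=$(ps --no-headers -o comm= "$pid")   # @942: reactor_0 here
    fi
    # @946: the real helper special-cases processes run under sudo; omitted in this sketch
    echo "killing process with pid $pid"    # @954
    kill "$pid"                             # @955: default SIGTERM
    wait "$pid"                             # @960: reap it and return its exit status
}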
00:15:10.235 [2024-11-26 04:11:11.859834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:10.235 [2024-11-26 04:11:11.859883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:15:10.235 [2024-11-26 04:11:11.859896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:15:10.235 [2024-11-26 04:11:11.859909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:10.235 [2024-11-26 04:11:11.859931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:15:10.235 [2024-11-26 04:11:11.860355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:10.235 [2024-11-26 04:11:11.860375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:15:10.235 [2024-11-26 04:11:11.860386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms
00:15:10.235 [2024-11-26 04:11:11.860394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:10.235 [2024-11-26 04:11:11.860886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:10.235 [2024-11-26 04:11:11.860930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:15:10.235 [2024-11-26 04:11:11.860953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms
00:15:10.235 [2024-11-26 04:11:11.860973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:10.235 [2024-11-26 04:11:11.865480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:10.235 [2024-11-26 04:11:11.865590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:15:10.235 [2024-11-26 04:11:11.865643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms
00:15:10.235 [2024-11-26 04:11:11.865667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:10.235 [2024-11-26 04:11:11.872707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:10.235 [2024-11-26 04:11:11.872822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:15:10.235 [2024-11-26 04:11:11.872873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.992 ms
00:15:10.235 [2024-11-26 04:11:11.872896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:10.235 [2024-11-26 04:11:11.875311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:10.235 [2024-11-26 04:11:11.875403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:15:10.235 [2024-11-26 04:11:11.875449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms
00:15:10.235 [2024-11-26 04:11:11.875471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:10.235 [2024-11-26 04:11:11.879076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:10.235 [2024-11-26 04:11:11.879172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:15:10.235 [2024-11-26 04:11:11.879668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.550 ms
00:15:10.235 [2024-11-26 04:11:11.879731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:10.235 [2024-11-26 04:11:11.880171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:10.235 [2024-11-26 04:11:11.880219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:15:10.235 [2024-11-26 04:11:11.880252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms
00:15:10.235 [2024-11-26 04:11:11.880279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:10.235 [2024-11-26
04:11:11.883902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.235 [2024-11-26 04:11:11.883979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:10.235 [2024-11-26 04:11:11.884021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.544 ms 00:15:10.235 [2024-11-26 04:11:11.884042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.235 [2024-11-26 04:11:11.886533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.235 [2024-11-26 04:11:11.886564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:10.235 [2024-11-26 04:11:11.886574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.400 ms 00:15:10.235 [2024-11-26 04:11:11.886581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.235 [2024-11-26 04:11:11.888371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.235 [2024-11-26 04:11:11.888404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:10.235 [2024-11-26 04:11:11.888415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:15:10.235 [2024-11-26 04:11:11.888421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.235 [2024-11-26 04:11:11.890211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.235 [2024-11-26 04:11:11.890324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:10.235 [2024-11-26 04:11:11.890342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:15:10.235 [2024-11-26 04:11:11.890350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.235 [2024-11-26 04:11:11.890384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:10.235 [2024-11-26 04:11:11.890399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:10.235 [2024-11-26 04:11:11.890643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890755] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 
04:11:11.890983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.890991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:15:10.236 [2024-11-26 04:11:11.891218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:10.236 [2024-11-26 04:11:11.891356] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:10.236 [2024-11-26 04:11:11.891366] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a761f2a-7b57-4683-924b-3bfae6ab70d6 00:15:10.236 [2024-11-26 04:11:11.891375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:10.236 [2024-11-26 04:11:11.891384] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:10.236 [2024-11-26 04:11:11.891392] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:10.236 [2024-11-26 04:11:11.891402] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:10.236 [2024-11-26 04:11:11.891409] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:10.236 [2024-11-26 04:11:11.891419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:10.236 [2024-11-26 04:11:11.891429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:10.236 [2024-11-26 04:11:11.891437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:10.236 [2024-11-26 04:11:11.891444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:10.236 [2024-11-26 04:11:11.891454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.236 [2024-11-26 04:11:11.891462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:10.236 [2024-11-26 04:11:11.891475] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:15:10.236 [2024-11-26 04:11:11.891482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.236 [2024-11-26 04:11:11.892953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.236 [2024-11-26 04:11:11.892979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:10.236 [2024-11-26 04:11:11.892990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:15:10.237 [2024-11-26 04:11:11.892998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.893084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:10.237 [2024-11-26 04:11:11.893093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:10.237 [2024-11-26 04:11:11.893105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:10.237 [2024-11-26 04:11:11.893112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.898638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.898744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:10.237 [2024-11-26 04:11:11.898794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.898818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.898896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.898918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:10.237 [2024-11-26 04:11:11.898940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.898958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.899010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.899070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:10.237 [2024-11-26 04:11:11.899095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.899114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.899149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.899192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:10.237 [2024-11-26 04:11:11.899217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.899236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.908653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.908792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:10.237 [2024-11-26 04:11:11.908845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.908869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.912590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.912689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:15:10.237 [2024-11-26 04:11:11.912708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.912715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.912744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.912752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:10.237 [2024-11-26 04:11:11.912761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.912768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.912813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.912822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:10.237 [2024-11-26 04:11:11.912831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.912838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.912905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.912914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:10.237 [2024-11-26 04:11:11.912922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.912932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.912972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.912981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:10.237 [2024-11-26 04:11:11.912994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.913001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.913039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.913051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:10.237 [2024-11-26 04:11:11.913060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.913067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.913113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:10.237 [2024-11-26 04:11:11.913124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:10.237 [2024-11-26 04:11:11.913134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:10.237 [2024-11-26 04:11:11.913140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:10.237 [2024-11-26 04:11:11.913268] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.415 ms, result 0 00:15:10.497 04:11:12 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:15:10.497 04:11:12 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:10.497 [2024-11-26 04:11:12.145708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:15:10.497 [2024-11-26 04:11:12.145830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83380 ] 00:15:10.758 [2024-11-26 04:11:12.291478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.758 [2024-11-26 04:11:12.322945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.758 [2024-11-26 04:11:12.407757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:10.758 [2024-11-26 04:11:12.407826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:11.021 [2024-11-26 04:11:12.557141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.557188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:11.021 [2024-11-26 04:11:12.557203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:11.021 [2024-11-26 04:11:12.557211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.559409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.559444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:11.021 [2024-11-26 04:11:12.559454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.177 ms 00:15:11.021 [2024-11-26 04:11:12.559461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.559542] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:11.021 [2024-11-26 04:11:12.559770] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:11.021 [2024-11-26 04:11:12.559783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.559791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:11.021 [2024-11-26 04:11:12.559799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:15:11.021 [2024-11-26 04:11:12.559807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.560920] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:11.021 [2024-11-26 04:11:12.563484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.563531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:11.021 [2024-11-26 04:11:12.563540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:15:11.021 [2024-11-26 04:11:12.563554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.563609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.563622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:11.021 [2024-11-26 04:11:12.563630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:11.021 [2024-11-26 04:11:12.563639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.568496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:11.021 [2024-11-26 04:11:12.568532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:11.021 [2024-11-26 04:11:12.568540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.819 ms 00:15:11.021 [2024-11-26 04:11:12.568547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.568642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.568651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:11.021 [2024-11-26 04:11:12.568664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:11.021 [2024-11-26 04:11:12.568672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.568699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.568707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:11.021 [2024-11-26 04:11:12.568714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:11.021 [2024-11-26 04:11:12.568726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.568748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:11.021 [2024-11-26 04:11:12.570076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.570103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:11.021 [2024-11-26 04:11:12.570112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:15:11.021 [2024-11-26 04:11:12.570118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.570157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.021 [2024-11-26 04:11:12.570171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:11.021 [2024-11-26 04:11:12.570179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:15:11.021 [2024-11-26 04:11:12.570185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.021 [2024-11-26 04:11:12.570205] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:11.022 [2024-11-26 04:11:12.570222] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:11.022 [2024-11-26 04:11:12.570256] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:11.022 [2024-11-26 04:11:12.570270] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:11.022 [2024-11-26 04:11:12.570342] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:11.022 [2024-11-26 04:11:12.570353] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:11.022 [2024-11-26 04:11:12.570363] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:11.022 [2024-11-26 04:11:12.570372] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570381] ftl_layout.c: 
678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570388] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:11.022 [2024-11-26 04:11:12.570395] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:11.022 [2024-11-26 04:11:12.570404] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:11.022 [2024-11-26 04:11:12.570414] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:11.022 [2024-11-26 04:11:12.570421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.022 [2024-11-26 04:11:12.570428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:11.022 [2024-11-26 04:11:12.570435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:15:11.022 [2024-11-26 04:11:12.570442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.022 [2024-11-26 04:11:12.570521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.022 [2024-11-26 04:11:12.570533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:11.022 [2024-11-26 04:11:12.570540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:11.022 [2024-11-26 04:11:12.570554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.022 [2024-11-26 04:11:12.570628] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:11.022 [2024-11-26 04:11:12.570637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:11.022 [2024-11-26 04:11:12.570645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:11.022 [2024-11-26 04:11:12.570669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570681] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:11.022 [2024-11-26 04:11:12.570688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:11.022 [2024-11-26 04:11:12.570701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:11.022 [2024-11-26 04:11:12.570712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:11.022 [2024-11-26 04:11:12.570718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:11.022 [2024-11-26 04:11:12.570725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:11.022 [2024-11-26 04:11:12.570733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:11.022 [2024-11-26 04:11:12.570741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570749] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:11.022 [2024-11-26 04:11:12.570756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:11.022 [2024-11-26 04:11:12.570763] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:11.022 [2024-11-26 04:11:12.570779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:11.022 [2024-11-26 04:11:12.570786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570794] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:11.022 [2024-11-26 04:11:12.570801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:11.022 [2024-11-26 04:11:12.570823] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:11.022 [2024-11-26 04:11:12.570844] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:11.022 [2024-11-26 04:11:12.570869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570884] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:11.022 [2024-11-26 04:11:12.570891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:11.022 [2024-11-26 04:11:12.570905] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:11.022 [2024-11-26 04:11:12.570913] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:11.022 [2024-11-26 04:11:12.570920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:11.022 [2024-11-26 04:11:12.570926] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:11.022 [2024-11-26 04:11:12.570934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:11.022 [2024-11-26 04:11:12.570942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:11.022 [2024-11-26 04:11:12.570950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:11.022 [2024-11-26 04:11:12.570958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:11.022 [2024-11-26 04:11:12.570966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:11.022 [2024-11-26 04:11:12.570973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:11.022 [2024-11-26 04:11:12.570983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:11.022 [2024-11-26 04:11:12.570990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:11.022 [2024-11-26 04:11:12.570998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:11.022 
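The dump_region output above encodes the full FTL region table: each region appears as three consecutive NOTICE lines giving its name, its offset, and its size in MiB. A minimal sketch of how such a dump could be turned back into tuples, assuming the console output has been captured to a plain-text string; the regex and the parse_regions helper are hypothetical, only the message format is taken from the lines above:

```python
import re

# Hypothetical helper: rebuild the region table from an FTL layout dump.
# Each region is printed as three NOTICE lines: "Region <name>",
# "offset: <n> MiB", "blocks: <n> MiB". The negative lookahead skips the
# "Region type:0x..." superblock-metadata lines, which use another format.
REGION_RE = re.compile(
    r"Region (?!type\b)(?P<name>\w+).*?"
    r"offset:\s*(?P<offset>[\d.]+) MiB.*?"
    r"blocks:\s*(?P<blocks>[\d.]+) MiB",
    re.DOTALL,
)

def parse_regions(log_text: str):
    """Yield (name, offset_mib, size_mib) for every dumped region."""
    for m in REGION_RE.finditer(log_text):
        yield m["name"], float(m["offset"]), float(m["blocks"])
```

Run against the dump above, this would recover e.g. ('l2p', 0.12, 90.0) and ('data_nvc', 107.88, 4096.0) for the NV cache layout, and ('data_btm', 0.25, 102400.0) for the base device.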
[2024-11-26 04:11:12.571006] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:11.022 [2024-11-26 04:11:12.571017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:11.022 [2024-11-26 04:11:12.571030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:11.022 [2024-11-26 04:11:12.571038] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:11.022 [2024-11-26 04:11:12.571046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:11.022 [2024-11-26 04:11:12.571054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:11.022 [2024-11-26 04:11:12.571062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:11.022 [2024-11-26 04:11:12.571071] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:11.022 [2024-11-26 04:11:12.571078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:11.022 [2024-11-26 04:11:12.571086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:11.022 [2024-11-26 04:11:12.571094] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:11.022 [2024-11-26 04:11:12.571102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:11.022 [2024-11-26 04:11:12.571111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:11.022 [2024-11-26 04:11:12.571120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:11.022 [2024-11-26 04:11:12.571129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:11.022 [2024-11-26 04:11:12.571137] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:11.022 [2024-11-26 04:11:12.571146] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:11.022 [2024-11-26 04:11:12.571154] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:11.022 [2024-11-26 04:11:12.571161] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:11.022 [2024-11-26 04:11:12.571168] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:11.022 [2024-11-26 04:11:12.571175] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:11.022 [2024-11-26 04:11:12.571182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.022 [2024-11-26 04:11:12.571192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:11.022 [2024-11-26 04:11:12.571199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:15:11.022 [2024-11-26 04:11:12.571205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.022 [2024-11-26 04:11:12.577219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.022 [2024-11-26 04:11:12.577364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:11.022 [2024-11-26 04:11:12.577380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.974 ms 00:15:11.023 [2024-11-26 04:11:12.577392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.577521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.577532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:11.023 [2024-11-26 04:11:12.577540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:15:11.023 [2024-11-26 04:11:12.577547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.594193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.594232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:11.023 [2024-11-26 04:11:12.594247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.623 ms 00:15:11.023 [2024-11-26 04:11:12.594256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.594327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.594343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:11.023 [2024-11-26 04:11:12.594352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:11.023 [2024-11-26 04:11:12.594359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.594696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.594718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:11.023 [2024-11-26 04:11:12.594727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:15:11.023 [2024-11-26 04:11:12.594738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.594856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.594866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:11.023 [2024-11-26 04:11:12.594875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:15:11.023 [2024-11-26 04:11:12.594883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.600240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.600379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:11.023 [2024-11-26 04:11:12.600395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
5.336 ms 00:15:11.023 [2024-11-26 04:11:12.600403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.602929] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:11.023 [2024-11-26 04:11:12.602963] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:11.023 [2024-11-26 04:11:12.602973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.602981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:11.023 [2024-11-26 04:11:12.602995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.479 ms 00:15:11.023 [2024-11-26 04:11:12.603002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.619863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.619896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:11.023 [2024-11-26 04:11:12.619906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.816 ms 00:15:11.023 [2024-11-26 04:11:12.619913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.622045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.622076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:11.023 [2024-11-26 04:11:12.622084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:15:11.023 [2024-11-26 04:11:12.622091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.624109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.624137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:11.023 [2024-11-26 04:11:12.624145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:15:11.023 [2024-11-26 04:11:12.624152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.624353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.624363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:11.023 [2024-11-26 04:11:12.624371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:15:11.023 [2024-11-26 04:11:12.624378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.642382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.642423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:11.023 [2024-11-26 04:11:12.642434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.983 ms 00:15:11.023 [2024-11-26 04:11:12.642441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.649786] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:11.023 [2024-11-26 04:11:12.663917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.663953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:11.023 [2024-11-26 
04:11:12.663964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.379 ms 00:15:11.023 [2024-11-26 04:11:12.663972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.664052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.664063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:11.023 [2024-11-26 04:11:12.664072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:11.023 [2024-11-26 04:11:12.664079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.664125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.664133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:11.023 [2024-11-26 04:11:12.664144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:11.023 [2024-11-26 04:11:12.664151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.665379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.665412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:11.023 [2024-11-26 04:11:12.665421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:15:11.023 [2024-11-26 04:11:12.665428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.665457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.665468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:11.023 [2024-11-26 04:11:12.665476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:11.023 [2024-11-26 04:11:12.665486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.665531] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:11.023 [2024-11-26 04:11:12.665541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.665548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:11.023 [2024-11-26 04:11:12.665557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:11.023 [2024-11-26 04:11:12.665565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.669616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.669648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:11.023 [2024-11-26 04:11:12.669664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.030 ms 00:15:11.023 [2024-11-26 04:11:12.669671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.669740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.023 [2024-11-26 04:11:12.669750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:11.023 [2024-11-26 04:11:12.669758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:11.023 [2024-11-26 04:11:12.669769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.023 [2024-11-26 04:11:12.670525] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:11.023 [2024-11-26 04:11:12.671527] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.106 ms, result 0 00:15:11.023 [2024-11-26 04:11:12.672448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:11.023 [2024-11-26 04:11:12.681042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:11.958  [2024-11-26T04:11:15.099Z] Copying: 38/256 [MB] (38 MBps) [2024-11-26T04:11:16.033Z] Copying: 82/256 [MB] (43 MBps) [2024-11-26T04:11:16.967Z] Copying: 125/256 [MB] (43 MBps) [2024-11-26T04:11:17.898Z] Copying: 169/256 [MB] (43 MBps) [2024-11-26T04:11:18.832Z] Copying: 214/256 [MB] (44 MBps) [2024-11-26T04:11:18.832Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-26 04:11:18.615377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:17.064 [2024-11-26 04:11:18.616951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.617013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:17.064 [2024-11-26 04:11:18.617034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:17.064 [2024-11-26 04:11:18.617049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.617087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:17.064 [2024-11-26 04:11:18.617516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.617535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:17.064 [2024-11-26 04:11:18.617549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:15:17.064 [2024-11-26 04:11:18.617556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.617814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.617828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:17.064 [2024-11-26 04:11:18.617837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:15:17.064 [2024-11-26 04:11:18.617847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.621870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.621889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:17.064 [2024-11-26 04:11:18.621903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.007 ms 00:15:17.064 [2024-11-26 04:11:18.621911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.628761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.628909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:17.064 [2024-11-26 04:11:18.628925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.808 ms 00:15:17.064 [2024-11-26 04:11:18.628932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.630744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.630786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:17.064 [2024-11-26 04:11:18.630798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:15:17.064 [2024-11-26 04:11:18.630805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.634647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.634678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:17.064 [2024-11-26 04:11:18.634688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:15:17.064 [2024-11-26 04:11:18.634705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.634824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.634833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:17.064 [2024-11-26 04:11:18.634846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:15:17.064 [2024-11-26 04:11:18.634853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.636439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.636576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:17.064 [2024-11-26 04:11:18.636590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:15:17.064 [2024-11-26 04:11:18.636597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.638013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.638038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:17.064 [2024-11-26 04:11:18.638046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.386 ms 00:15:17.064 [2024-11-26 04:11:18.638052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.639071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.639092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:17.064 [2024-11-26 04:11:18.639100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:15:17.064 [2024-11-26 04:11:18.639106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.640117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.064 [2024-11-26 04:11:18.640145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:17.064 [2024-11-26 04:11:18.640154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:15:17.064 [2024-11-26 04:11:18.640162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.064 [2024-11-26 04:11:18.640190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:17.064 [2024-11-26 04:11:18.640210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640230] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:17.064 [2024-11-26 04:11:18.640345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640426] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 
[2024-11-26 04:11:18.640622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:15:17.065 [2024-11-26 04:11:18.640804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:17.065 [2024-11-26 04:11:18.640887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:17.066 [2024-11-26 04:11:18.640995] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:17.066 [2024-11-26 04:11:18.641003] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a761f2a-7b57-4683-924b-3bfae6ab70d6 
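Before "Dump statistics" completes, the shutdown path prints one validity line per band ("Band N: <valid> / 261120 wr_cnt: <w> state: <s>") followed by device totals. A short sketch for aggregating that dump, under the same assumption that the log is available as plain text; BAND_RE and summarize_bands are illustrative names, the line format is copied from the dump above:

```python
import re
from collections import Counter

# Matches "Band 1: 0 / 261120 wr_cnt: 0 state: free" as printed above.
BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def summarize_bands(log_text: str):
    """Aggregate the per-band validity dump into per-state counts
    and a total of valid blocks across all bands."""
    states = Counter()
    valid_blocks = 0
    for band, valid, size, wr_cnt, state in BAND_RE.findall(log_text):
        states[state] += 1
        valid_blocks += int(valid)
    return states, valid_blocks

# For the dump above this would report Counter({'free': 100}) and
# 0 valid blocks, consistent with "total valid LBAs: 0" in the stats.
```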
00:15:17.066 [2024-11-26 04:11:18.641011] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:17.066 [2024-11-26 04:11:18.641018] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:17.066 [2024-11-26 04:11:18.641025] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:17.066 [2024-11-26 04:11:18.641032] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:17.066 [2024-11-26 04:11:18.641045] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:17.066 [2024-11-26 04:11:18.641052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:17.066 [2024-11-26 04:11:18.641059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:17.066 [2024-11-26 04:11:18.641065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:17.066 [2024-11-26 04:11:18.641071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:17.066 [2024-11-26 04:11:18.641078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.066 [2024-11-26 04:11:18.641087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:17.066 [2024-11-26 04:11:18.641095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:15:17.066 [2024-11-26 04:11:18.641102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.642390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.066 [2024-11-26 04:11:18.642410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:17.066 [2024-11-26 04:11:18.642418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:15:17.066 [2024-11-26 04:11:18.642425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.642494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.066 [2024-11-26 04:11:18.642526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:17.066 [2024-11-26 04:11:18.642534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:17.066 [2024-11-26 04:11:18.642541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.647312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.647434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:17.066 [2024-11-26 04:11:18.647448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.647456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.647540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.647549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:17.066 [2024-11-26 04:11:18.647556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.647564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.647600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.647609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:17.066 [2024-11-26 04:11:18.647615] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.647623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.647639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.647649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:17.066 [2024-11-26 04:11:18.647656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.647663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.655823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.655862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:17.066 [2024-11-26 04:11:18.655879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.655887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.659382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.659416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:17.066 [2024-11-26 04:11:18.659425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.659432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.659454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.659462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:17.066 [2024-11-26 04:11:18.659469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.659476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.659518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.659526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:17.066 [2024-11-26 04:11:18.659538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.659545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.659608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.659617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:17.066 [2024-11-26 04:11:18.659628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.659636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.659666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.659674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:17.066 [2024-11-26 04:11:18.659684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.659691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.659724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.066 [2024-11-26 04:11:18.659731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:15:17.066 [2024-11-26 04:11:18.659742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.066 [2024-11-26 04:11:18.659749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.066 [2024-11-26 04:11:18.659791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:17.067 [2024-11-26 04:11:18.659800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:17.067 [2024-11-26 04:11:18.659809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:17.067 [2024-11-26 04:11:18.659816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.067 [2024-11-26 04:11:18.659946] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 43.003 ms, result 0 00:15:17.325 00:15:17.325 00:15:17.325 04:11:18 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:15:17.325 04:11:18 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:15:17.891 04:11:19 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:17.891 [2024-11-26 04:11:19.455609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.891 [2024-11-26 04:11:19.455717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83457 ] 00:15:17.891 [2024-11-26 04:11:19.603666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.891 [2024-11-26 04:11:19.633838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.151 [2024-11-26 04:11:19.716032] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:18.151 [2024-11-26 04:11:19.716104] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:18.151 [2024-11-26 04:11:19.861594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.861644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:18.151 [2024-11-26 04:11:19.861657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:18.151 [2024-11-26 04:11:19.861667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.863816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.863849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:18.151 [2024-11-26 04:11:19.863859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:15:18.151 [2024-11-26 04:11:19.863867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.863967] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:18.151 [2024-11-26 04:11:19.864192] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:18.151 [2024-11-26 04:11:19.864205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 
04:11:19.864217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:18.151 [2024-11-26 04:11:19.864225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:15:18.151 [2024-11-26 04:11:19.864233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.865353] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:18.151 [2024-11-26 04:11:19.867454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.867611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:18.151 [2024-11-26 04:11:19.867626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:15:18.151 [2024-11-26 04:11:19.867634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.867687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.867696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:18.151 [2024-11-26 04:11:19.867704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:18.151 [2024-11-26 04:11:19.867718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.872234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.872262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:18.151 [2024-11-26 04:11:19.872271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.476 ms 00:15:18.151 [2024-11-26 04:11:19.872278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.872359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.872368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:18.151 [2024-11-26 04:11:19.872378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:18.151 [2024-11-26 04:11:19.872386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.872411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.872419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:18.151 [2024-11-26 04:11:19.872426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:18.151 [2024-11-26 04:11:19.872433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.872456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:18.151 [2024-11-26 04:11:19.873738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.873765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:18.151 [2024-11-26 04:11:19.873774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:15:18.151 [2024-11-26 04:11:19.873781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.873828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.873839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:18.151 [2024-11-26 
04:11:19.873847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:18.151 [2024-11-26 04:11:19.873853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.873871] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:18.151 [2024-11-26 04:11:19.873887] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:18.151 [2024-11-26 04:11:19.873919] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:18.151 [2024-11-26 04:11:19.873937] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:18.151 [2024-11-26 04:11:19.874008] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:18.151 [2024-11-26 04:11:19.874023] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:18.151 [2024-11-26 04:11:19.874032] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:18.151 [2024-11-26 04:11:19.874042] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874050] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874058] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:18.151 [2024-11-26 04:11:19.874068] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:18.151 [2024-11-26 04:11:19.874077] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:18.151 [2024-11-26 04:11:19.874084] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:18.151 [2024-11-26 04:11:19.874091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.874098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:18.151 [2024-11-26 04:11:19.874105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:15:18.151 [2024-11-26 04:11:19.874115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.874178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.151 [2024-11-26 04:11:19.874186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:18.151 [2024-11-26 04:11:19.874193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:18.151 [2024-11-26 04:11:19.874200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.151 [2024-11-26 04:11:19.874274] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:18.151 [2024-11-26 04:11:19.874286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:18.151 [2024-11-26 04:11:19.874294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:18.151 [2024-11-26 04:11:19.874315] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:18.151 [2024-11-26 04:11:19.874335] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:18.151 [2024-11-26 04:11:19.874348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:18.151 [2024-11-26 04:11:19.874359] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:18.151 [2024-11-26 04:11:19.874366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:18.151 [2024-11-26 04:11:19.874372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:18.151 [2024-11-26 04:11:19.874378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:18.151 [2024-11-26 04:11:19.874385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:18.151 [2024-11-26 04:11:19.874402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:18.151 [2024-11-26 04:11:19.874409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:18.151 [2024-11-26 04:11:19.874425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:18.151 [2024-11-26 04:11:19.874432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:18.151 [2024-11-26 04:11:19.874447] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:18.151 [2024-11-26 04:11:19.874469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874483] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:18.151 [2024-11-26 04:11:19.874490] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:18.151 [2024-11-26 04:11:19.874548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:18.151 [2024-11-26 04:11:19.874555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:18.151 [2024-11-26 04:11:19.874562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:18.152 [2024-11-26 04:11:19.874569] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:18.152 [2024-11-26 04:11:19.874577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:18.152 [2024-11-26 04:11:19.874584] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:18.152 [2024-11-26 04:11:19.874591] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:18.152 [2024-11-26 04:11:19.874598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:18.152 [2024-11-26 04:11:19.874605] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:18.152 [2024-11-26 04:11:19.874614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:18.152 [2024-11-26 04:11:19.874622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:18.152 [2024-11-26 04:11:19.874630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:18.152 [2024-11-26 04:11:19.874638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:18.152 [2024-11-26 04:11:19.874647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:18.152 [2024-11-26 04:11:19.874654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:18.152 [2024-11-26 04:11:19.874662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:18.152 [2024-11-26 04:11:19.874670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:18.152 [2024-11-26 04:11:19.874678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:18.152 [2024-11-26 04:11:19.874686] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:18.152 [2024-11-26 04:11:19.874701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:18.152 [2024-11-26 04:11:19.874713] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:18.152 [2024-11-26 04:11:19.874721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:18.152 [2024-11-26 04:11:19.874729] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:18.152 [2024-11-26 04:11:19.874737] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:18.152 [2024-11-26 04:11:19.874745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:18.152 [2024-11-26 04:11:19.874753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:18.152 [2024-11-26 04:11:19.874761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:18.152 [2024-11-26 04:11:19.874768] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:18.152 [2024-11-26 04:11:19.874776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:18.152 [2024-11-26 04:11:19.874783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:18.152 [2024-11-26 04:11:19.874790] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:18.152 [2024-11-26 04:11:19.874797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:18.152 [2024-11-26 04:11:19.874806] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:18.152 [2024-11-26 04:11:19.874812] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:18.152 [2024-11-26 04:11:19.874821] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:18.152 [2024-11-26 04:11:19.874828] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:18.152 [2024-11-26 04:11:19.874836] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:18.152 [2024-11-26 04:11:19.874842] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:18.152 [2024-11-26 04:11:19.874850] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:18.152 [2024-11-26 04:11:19.874857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.874864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:18.152 [2024-11-26 04:11:19.874871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:15:18.152 [2024-11-26 04:11:19.874878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.152 [2024-11-26 04:11:19.880801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.880911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:18.152 [2024-11-26 04:11:19.880966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.882 ms 00:15:18.152 [2024-11-26 04:11:19.880993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.152 [2024-11-26 04:11:19.881122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.881154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:18.152 [2024-11-26 04:11:19.881203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:18.152 [2024-11-26 04:11:19.881225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.152 [2024-11-26 04:11:19.895922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.896050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:18.152 [2024-11-26 04:11:19.896114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.657 ms 00:15:18.152 [2024-11-26 04:11:19.896140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.152 [2024-11-26 04:11:19.896284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.896315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
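A quick cross-check of the layout dump above, using only values the log itself prints. A minimal bash sketch; the 4096-byte FTL block size is an assumption inferred from the 0.12 MiB / 0x20-block superblock region, and the even chunk split is likewise assumed rather than stated by the log:

  l2p_entries=23592960      # "L2P entries: 23592960"
  l2p_addr_size=4           # "L2P address size: 4" (bytes per L2P entry)
  echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))  # prints 90, matching the 90.00 MiB l2p region

  p2l_pages=1024            # "P2L checkpoint pages: 1024"
  blk=4096                  # assumed FTL block size in bytes
  echo $(( p2l_pages * blk / 1024 / 1024 ))              # prints 4, matching each 4.00 MiB p2l0..p2l3 region

  data_nvc_mib=4096         # "Region data_nvc ... blocks: 4096.00 MiB"
  chunks=4                  # "NV cache chunk count 4"
  echo $(( data_nvc_mib / chunks ))                      # prints 1024, i.e. ~1 GiB per NV cache chunk (assuming equal chunks)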
00:15:18.152 [2024-11-26 04:11:19.896451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:18.152 [2024-11-26 04:11:19.896474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.152 [2024-11-26 04:11:19.896866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.896970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:18.152 [2024-11-26 04:11:19.897030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:15:18.152 [2024-11-26 04:11:19.897063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.152 [2024-11-26 04:11:19.897222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.897258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:18.152 [2024-11-26 04:11:19.897321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:15:18.152 [2024-11-26 04:11:19.897350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.152 [2024-11-26 04:11:19.902895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.903014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:18.152 [2024-11-26 04:11:19.903102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.504 ms 00:15:18.152 [2024-11-26 04:11:19.903131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.152 [2024-11-26 04:11:19.905552] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:18.152 [2024-11-26 04:11:19.905683] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:18.152 [2024-11-26 04:11:19.905775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.152 [2024-11-26 04:11:19.905802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:18.152 [2024-11-26 04:11:19.905937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.525 ms 00:15:18.152 [2024-11-26 04:11:19.905966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.920927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.921026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:18.414 [2024-11-26 04:11:19.921073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.902 ms 00:15:18.414 [2024-11-26 04:11:19.921094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.923132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.923167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:18.414 [2024-11-26 04:11:19.923178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:15:18.414 [2024-11-26 04:11:19.923186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.924693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.924721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:18.414 [2024-11-26 04:11:19.924730] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:15:18.414 [2024-11-26 04:11:19.924736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.924939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.924950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:18.414 [2024-11-26 04:11:19.924963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:15:18.414 [2024-11-26 04:11:19.924969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.942284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.942456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:18.414 [2024-11-26 04:11:19.942473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.294 ms 00:15:18.414 [2024-11-26 04:11:19.942481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.949867] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:18.414 [2024-11-26 04:11:19.963667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.963700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:18.414 [2024-11-26 04:11:19.963712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.089 ms 00:15:18.414 [2024-11-26 04:11:19.963720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.963788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.963801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:18.414 [2024-11-26 04:11:19.963809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:18.414 [2024-11-26 04:11:19.963817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.963860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.963868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:18.414 [2024-11-26 04:11:19.963876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:18.414 [2024-11-26 04:11:19.963883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.965044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.965164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:18.414 [2024-11-26 04:11:19.965181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:15:18.414 [2024-11-26 04:11:19.965189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.965222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.965235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:18.414 [2024-11-26 04:11:19.965245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:18.414 [2024-11-26 04:11:19.965253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.965282] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:15:18.414 [2024-11-26 04:11:19.965291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.965302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:18.414 [2024-11-26 04:11:19.965309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:18.414 [2024-11-26 04:11:19.965318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.968740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.968771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:18.414 [2024-11-26 04:11:19.968781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:15:18.414 [2024-11-26 04:11:19.968789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.968870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:19.968881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:18.414 [2024-11-26 04:11:19.968889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:18.414 [2024-11-26 04:11:19.968896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:19.969731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:18.414 [2024-11-26 04:11:19.970695] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.856 ms, result 0 00:15:18.414 [2024-11-26 04:11:19.971297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:18.414 [2024-11-26 04:11:19.980666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:18.414  [2024-11-26T04:11:20.182Z] Copying: 4096/4096 [kB] (average 37 MBps)[2024-11-26 04:11:20.088989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:18.414 [2024-11-26 04:11:20.089899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.089934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:18.414 [2024-11-26 04:11:20.089950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:18.414 [2024-11-26 04:11:20.089962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.089983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:18.414 [2024-11-26 04:11:20.090375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.090388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:18.414 [2024-11-26 04:11:20.090401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:15:18.414 [2024-11-26 04:11:20.090408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.091981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.092090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:18.414 [2024-11-26 
04:11:20.092105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:15:18.414 [2024-11-26 04:11:20.092112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.096016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.096093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:18.414 [2024-11-26 04:11:20.096141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.878 ms 00:15:18.414 [2024-11-26 04:11:20.096162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.103107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.103198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:18.414 [2024-11-26 04:11:20.103259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.908 ms 00:15:18.414 [2024-11-26 04:11:20.103280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.104465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.104572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:18.414 [2024-11-26 04:11:20.104619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:15:18.414 [2024-11-26 04:11:20.104641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.108239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.108333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:18.414 [2024-11-26 04:11:20.108382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.560 ms 00:15:18.414 [2024-11-26 04:11:20.108413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.108556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.108587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:18.414 [2024-11-26 04:11:20.108641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:15:18.414 [2024-11-26 04:11:20.108663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.110378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.110473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:18.414 [2024-11-26 04:11:20.110535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:15:18.414 [2024-11-26 04:11:20.110558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.111913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.112005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:18.414 [2024-11-26 04:11:20.112049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:15:18.414 [2024-11-26 04:11:20.112069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.414 [2024-11-26 04:11:20.113326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.414 [2024-11-26 04:11:20.113416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:15:18.414 [2024-11-26 04:11:20.113463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:15:18.414 [2024-11-26 04:11:20.113484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.415 [2024-11-26 04:11:20.114329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.415 [2024-11-26 04:11:20.114416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:18.415 [2024-11-26 04:11:20.114463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:15:18.415 [2024-11-26 04:11:20.114484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.415 [2024-11-26 04:11:20.114546] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:18.415 [2024-11-26 04:11:20.114614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.114983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 
[2024-11-26 04:11:20.115263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.115969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:15:18.415 [2024-11-26 04:11:20.116219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.116951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:18.415 [2024-11-26 04:11:20.117767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.117818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.117846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.117874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.117970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.117979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.117987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.117994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:18.416 [2024-11-26 04:11:20.118073] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:18.416 [2024-11-26 04:11:20.118080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a761f2a-7b57-4683-924b-3bfae6ab70d6 00:15:18.416 [2024-11-26 04:11:20.118088] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:18.416 [2024-11-26 04:11:20.118094] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:18.416 [2024-11-26 04:11:20.118101] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:18.416 [2024-11-26 04:11:20.118109] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:18.416 [2024-11-26 04:11:20.118123] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:18.416 [2024-11-26 04:11:20.118130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:18.416 [2024-11-26 04:11:20.118137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:18.416 [2024-11-26 04:11:20.118143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:18.416 [2024-11-26 04:11:20.118149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:18.416 [2024-11-26 04:11:20.118156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.416 [2024-11-26 04:11:20.118166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:18.416 [2024-11-26 04:11:20.118179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.611 ms 00:15:18.416 [2024-11-26 04:11:20.118186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.119477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.416 [2024-11-26 04:11:20.119517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:18.416 [2024-11-26 04:11:20.119526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:15:18.416 [2024-11-26 04:11:20.119533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.119588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.416 [2024-11-26 04:11:20.119596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:18.416 [2024-11-26 04:11:20.119603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:18.416 [2024-11-26 04:11:20.119610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 
04:11:20.124352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.124383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:18.416 [2024-11-26 04:11:20.124396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.124403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.124465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.124474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:18.416 [2024-11-26 04:11:20.124481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.124488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.124545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.124555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:18.416 [2024-11-26 04:11:20.124563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.124570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.124587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.124596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:18.416 [2024-11-26 04:11:20.124604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.124611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.132797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.132835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:18.416 [2024-11-26 04:11:20.132845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.132865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.136317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.136352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:18.416 [2024-11-26 04:11:20.136361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.136368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.136391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.136403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:18.416 [2024-11-26 04:11:20.136411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.136418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.136446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.136454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:18.416 [2024-11-26 04:11:20.136466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.136473] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.136550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.136559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:18.416 [2024-11-26 04:11:20.136567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.136574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.136608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.136620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:18.416 [2024-11-26 04:11:20.136627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.136636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.136668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.136676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:18.416 [2024-11-26 04:11:20.136684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.136691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.136730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:18.416 [2024-11-26 04:11:20.136742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:18.416 [2024-11-26 04:11:20.136752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:18.416 [2024-11-26 04:11:20.136759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.416 [2024-11-26 04:11:20.136902] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.968 ms, result 0 00:15:18.675 00:15:18.675 00:15:18.675 04:11:20 -- ftl/trim.sh@93 -- # svcpid=83471 00:15:18.675 04:11:20 -- ftl/trim.sh@94 -- # waitforlisten 83471 00:15:18.675 04:11:20 -- common/autotest_common.sh@829 -- # '[' -z 83471 ']' 00:15:18.675 04:11:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.675 04:11:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.675 04:11:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.675 04:11:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.675 04:11:20 -- common/autotest_common.sh@10 -- # set +x 00:15:18.675 04:11:20 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:15:18.675 [2024-11-26 04:11:20.386681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
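The trace above shows trim.sh launching spdk_tgt (svcpid 83471) and blocking in waitforlisten until the target answers on the RPC socket at /var/tmp/spdk.sock. A minimal sketch of that polling pattern, assuming rpc.py's -s socket flag and the rpc_get_methods RPC; the real autotest_common.sh helper also checks that the pid stays alive, so this is an illustrative stand-in, not its actual code:

  rpc_addr=/var/tmp/spdk.sock
  max_retries=100           # mirrors "local max_retries=100" in the trace
  for (( i = 0; i < max_retries; i++ )); do
      if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break             # spdk_tgt is up and answering RPCs
      fi
      sleep 0.5             # retry interval is illustrative
  done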
00:15:18.675 [2024-11-26 04:11:20.386791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83471 ] 00:15:18.933 [2024-11-26 04:11:20.530998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.933 [2024-11-26 04:11:20.561073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:18.933 [2024-11-26 04:11:20.561252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.499 04:11:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.499 04:11:21 -- common/autotest_common.sh@862 -- # return 0 00:15:19.499 04:11:21 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:15:19.758 [2024-11-26 04:11:21.404661] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:19.758 [2024-11-26 04:11:21.404907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:20.017 [2024-11-26 04:11:21.566132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.017 [2024-11-26 04:11:21.566182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:20.017 [2024-11-26 04:11:21.566197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:20.017 [2024-11-26 04:11:21.566208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.017 [2024-11-26 04:11:21.568359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.017 [2024-11-26 04:11:21.568396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:20.017 [2024-11-26 04:11:21.568408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:15:20.017 [2024-11-26 04:11:21.568415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.017 [2024-11-26 04:11:21.568555] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:20.017 [2024-11-26 04:11:21.568777] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:20.017 [2024-11-26 04:11:21.568797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.017 [2024-11-26 04:11:21.568804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:20.017 [2024-11-26 04:11:21.568814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:15:20.017 [2024-11-26 04:11:21.568821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.017 [2024-11-26 04:11:21.569891] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:20.017 [2024-11-26 04:11:21.571919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.017 [2024-11-26 04:11:21.571953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:20.017 [2024-11-26 04:11:21.571963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:15:20.017 [2024-11-26 04:11:21.571972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.017 [2024-11-26 04:11:21.572023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.017 [2024-11-26 04:11:21.572036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:15:20.017 [2024-11-26 04:11:21.572046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:20.017 [2024-11-26 04:11:21.572055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.017 [2024-11-26 04:11:21.576661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.017 [2024-11-26 04:11:21.576790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:20.017 [2024-11-26 04:11:21.576805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.560 ms 00:15:20.017 [2024-11-26 04:11:21.576814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.017 [2024-11-26 04:11:21.576931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.017 [2024-11-26 04:11:21.576947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:20.017 [2024-11-26 04:11:21.576956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:15:20.017 [2024-11-26 04:11:21.576964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.017 [2024-11-26 04:11:21.576990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.018 [2024-11-26 04:11:21.576999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:20.018 [2024-11-26 04:11:21.577007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:20.018 [2024-11-26 04:11:21.577015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.018 [2024-11-26 04:11:21.577041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:20.018 [2024-11-26 04:11:21.578291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.018 [2024-11-26 04:11:21.578318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:20.018 [2024-11-26 04:11:21.578333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:15:20.018 [2024-11-26 04:11:21.578340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.018 [2024-11-26 04:11:21.578374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.018 [2024-11-26 04:11:21.578382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:20.018 [2024-11-26 04:11:21.578391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:20.018 [2024-11-26 04:11:21.578399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.018 [2024-11-26 04:11:21.578423] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:20.018 [2024-11-26 04:11:21.578442] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:20.018 [2024-11-26 04:11:21.578481] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:20.018 [2024-11-26 04:11:21.578496] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:20.018 [2024-11-26 04:11:21.578586] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:20.018 [2024-11-26 04:11:21.578596] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:15:20.018 [2024-11-26 04:11:21.578607] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:20.018 [2024-11-26 04:11:21.578620] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:20.018 [2024-11-26 04:11:21.578631] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:20.018 [2024-11-26 04:11:21.578639] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:20.018 [2024-11-26 04:11:21.578649] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:20.018 [2024-11-26 04:11:21.578656] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:20.018 [2024-11-26 04:11:21.578664] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:20.018 [2024-11-26 04:11:21.578672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.018 [2024-11-26 04:11:21.578680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:20.018 [2024-11-26 04:11:21.578687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:15:20.018 [2024-11-26 04:11:21.578696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.018 [2024-11-26 04:11:21.578759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.018 [2024-11-26 04:11:21.578768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:20.018 [2024-11-26 04:11:21.578775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:20.018 [2024-11-26 04:11:21.578788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.018 [2024-11-26 04:11:21.578865] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:20.018 [2024-11-26 04:11:21.578875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:20.018 [2024-11-26 04:11:21.578883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:20.018 [2024-11-26 04:11:21.578893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.018 [2024-11-26 04:11:21.578901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:20.018 [2024-11-26 04:11:21.578909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:20.018 [2024-11-26 04:11:21.578915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:20.018 [2024-11-26 04:11:21.578925] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:20.018 [2024-11-26 04:11:21.578932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:20.018 [2024-11-26 04:11:21.578941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:20.018 [2024-11-26 04:11:21.578949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:20.018 [2024-11-26 04:11:21.578958] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:20.018 [2024-11-26 04:11:21.578966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:20.018 [2024-11-26 04:11:21.578974] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:20.018 [2024-11-26 04:11:21.578982] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:20.018 [2024-11-26 04:11:21.578991] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.018 [2024-11-26 04:11:21.578998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:20.018 [2024-11-26 04:11:21.579007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:20.018 [2024-11-26 04:11:21.579015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.018 [2024-11-26 04:11:21.579025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:20.018 [2024-11-26 04:11:21.579033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:20.018 [2024-11-26 04:11:21.579041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:20.018 [2024-11-26 04:11:21.579055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:20.018 [2024-11-26 04:11:21.579064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:20.018 [2024-11-26 04:11:21.579071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.018 [2024-11-26 04:11:21.579081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:20.018 [2024-11-26 04:11:21.579089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:20.018 [2024-11-26 04:11:21.579097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.018 [2024-11-26 04:11:21.579105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:20.018 [2024-11-26 04:11:21.579114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:20.018 [2024-11-26 04:11:21.579121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.018 [2024-11-26 04:11:21.579131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:20.018 [2024-11-26 04:11:21.579138] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:20.018 [2024-11-26 04:11:21.579148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:20.018 [2024-11-26 04:11:21.579155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:20.018 [2024-11-26 04:11:21.579165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:20.018 [2024-11-26 04:11:21.579173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:20.018 [2024-11-26 04:11:21.579182] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:20.018 [2024-11-26 04:11:21.579189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:20.018 [2024-11-26 04:11:21.579197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:20.018 [2024-11-26 04:11:21.579204] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:20.018 [2024-11-26 04:11:21.579214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:20.018 [2024-11-26 04:11:21.579222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:20.018 [2024-11-26 04:11:21.579232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:20.018 [2024-11-26 04:11:21.579242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:20.018 [2024-11-26 04:11:21.579251] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:20.018 [2024-11-26 04:11:21.579258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:15:20.018 [2024-11-26 04:11:21.579269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:20.018 [2024-11-26 04:11:21.579276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:20.018 [2024-11-26 04:11:21.579285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:20.018 [2024-11-26 04:11:21.579293] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:20.018 [2024-11-26 04:11:21.579308] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:20.018 [2024-11-26 04:11:21.579317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:20.018 [2024-11-26 04:11:21.579326] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:20.018 [2024-11-26 04:11:21.579334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:20.018 [2024-11-26 04:11:21.579342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:20.018 [2024-11-26 04:11:21.579349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:20.018 [2024-11-26 04:11:21.579358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:20.018 [2024-11-26 04:11:21.579365] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:20.018 [2024-11-26 04:11:21.579373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:20.018 [2024-11-26 04:11:21.579380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:20.018 [2024-11-26 04:11:21.579389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:20.018 [2024-11-26 04:11:21.579395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:20.018 [2024-11-26 04:11:21.579404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:20.018 [2024-11-26 04:11:21.579412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:20.019 [2024-11-26 04:11:21.579420] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:20.019 [2024-11-26 04:11:21.579428] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:20.019 [2024-11-26 04:11:21.579442] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:20.019 [2024-11-26 04:11:21.579449] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:20.019 [2024-11-26 04:11:21.579459] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:20.019 [2024-11-26 04:11:21.579466] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:20.019 [2024-11-26 04:11:21.579475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.579482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:20.019 [2024-11-26 04:11:21.579491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:15:20.019 [2024-11-26 04:11:21.579509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.585214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.585247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:20.019 [2024-11-26 04:11:21.585260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.657 ms 00:15:20.019 [2024-11-26 04:11:21.585268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.585380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.585389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:20.019 [2024-11-26 04:11:21.585399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:20.019 [2024-11-26 04:11:21.585410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.594120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.594152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:20.019 [2024-11-26 04:11:21.594166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.687 ms 00:15:20.019 [2024-11-26 04:11:21.594173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.594212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.594220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:20.019 [2024-11-26 04:11:21.594230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:20.019 [2024-11-26 04:11:21.594236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.594544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.594567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:20.019 [2024-11-26 04:11:21.594583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:15:20.019 [2024-11-26 04:11:21.594590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.594705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.594716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:20.019 [2024-11-26 04:11:21.594725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:15:20.019 [2024-11-26 04:11:21.594732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.599747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.599875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:20.019 [2024-11-26 04:11:21.599893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.992 ms 00:15:20.019 [2024-11-26 04:11:21.599906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.602240] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:15:20.019 [2024-11-26 04:11:21.602272] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:20.019 [2024-11-26 04:11:21.602285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.602297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:20.019 [2024-11-26 04:11:21.602306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:15:20.019 [2024-11-26 04:11:21.602313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.616620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.616649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:20.019 [2024-11-26 04:11:21.616662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.265 ms 00:15:20.019 [2024-11-26 04:11:21.616672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.618546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.618654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:20.019 [2024-11-26 04:11:21.618673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:15:20.019 [2024-11-26 04:11:21.618680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.620035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.620065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:20.019 [2024-11-26 04:11:21.620076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:15:20.019 [2024-11-26 04:11:21.620084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.620314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.620325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:20.019 [2024-11-26 04:11:21.620335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:15:20.019 [2024-11-26 04:11:21.620346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.637482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.637534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:20.019 [2024-11-26 04:11:21.637548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.114 ms 00:15:20.019 [2024-11-26 04:11:21.637558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.644874] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:20.019 [2024-11-26 04:11:21.658386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.658532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:20.019 [2024-11-26 04:11:21.658548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.749 ms 00:15:20.019 [2024-11-26 04:11:21.658561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.658633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.658643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:20.019 [2024-11-26 04:11:21.658652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:20.019 [2024-11-26 04:11:21.658660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.658705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.658719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:20.019 [2024-11-26 04:11:21.658726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:20.019 [2024-11-26 04:11:21.658735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.659872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.659904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:20.019 [2024-11-26 04:11:21.659913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:15:20.019 [2024-11-26 04:11:21.659921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.659949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.659960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:20.019 [2024-11-26 04:11:21.659968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:20.019 [2024-11-26 04:11:21.659976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.660009] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:20.019 [2024-11-26 04:11:21.660019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.660026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:20.019 [2024-11-26 04:11:21.660037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:20.019 [2024-11-26 04:11:21.660044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.663287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.663319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:20.019 [2024-11-26 04:11:21.663330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:15:20.019 [2024-11-26 04:11:21.663337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.663405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.019 [2024-11-26 04:11:21.663415] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:20.019 [2024-11-26 04:11:21.663426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:20.019 [2024-11-26 04:11:21.663433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.019 [2024-11-26 04:11:21.664161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:20.019 [2024-11-26 04:11:21.665111] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.788 ms, result 0 00:15:20.019 [2024-11-26 04:11:21.665888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:20.019 Some configs were skipped because the RPC state that can call them passed over. 00:15:20.019 04:11:21 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:15:20.278 [2024-11-26 04:11:21.883434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.278 [2024-11-26 04:11:21.883489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:15:20.278 [2024-11-26 04:11:21.883515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.412 ms 00:15:20.278 [2024-11-26 04:11:21.883525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.278 [2024-11-26 04:11:21.883560] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.542 ms, result 0 00:15:20.278 true 00:15:20.278 04:11:21 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:15:20.538 [2024-11-26 04:11:22.079645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.079687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:15:20.538 [2024-11-26 04:11:22.079699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.800 ms 00:15:20.538 [2024-11-26 04:11:22.079707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.079743] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 3.898 ms, result 0 00:15:20.538 true 00:15:20.538 04:11:22 -- ftl/trim.sh@102 -- # killprocess 83471 00:15:20.538 04:11:22 -- common/autotest_common.sh@936 -- # '[' -z 83471 ']' 00:15:20.538 04:11:22 -- common/autotest_common.sh@940 -- # kill -0 83471 00:15:20.538 04:11:22 -- common/autotest_common.sh@941 -- # uname 00:15:20.538 04:11:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:20.538 04:11:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83471 00:15:20.538 killing process with pid 83471 00:15:20.538 04:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:20.538 04:11:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:20.538 04:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83471' 00:15:20.538 04:11:22 -- common/autotest_common.sh@955 -- # kill 83471 00:15:20.538 04:11:22 -- common/autotest_common.sh@960 -- # wait 83471 00:15:20.538 [2024-11-26 04:11:22.215862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.215917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:20.538 
[2024-11-26 04:11:22.215933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:20.538 [2024-11-26 04:11:22.215944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.215966] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:20.538 [2024-11-26 04:11:22.216384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.216411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:20.538 [2024-11-26 04:11:22.216424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:15:20.538 [2024-11-26 04:11:22.216431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.216736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.216751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:20.538 [2024-11-26 04:11:22.216762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:15:20.538 [2024-11-26 04:11:22.216773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.220779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.220808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:20.538 [2024-11-26 04:11:22.220818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.980 ms 00:15:20.538 [2024-11-26 04:11:22.220826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.227815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.227942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:20.538 [2024-11-26 04:11:22.227961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.954 ms 00:15:20.538 [2024-11-26 04:11:22.227975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.229421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.229450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:20.538 [2024-11-26 04:11:22.229460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:15:20.538 [2024-11-26 04:11:22.229467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.233000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.233034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:20.538 [2024-11-26 04:11:22.233044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.499 ms 00:15:20.538 [2024-11-26 04:11:22.233051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.233178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.233187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:20.538 [2024-11-26 04:11:22.233197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:15:20.538 [2024-11-26 04:11:22.233205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 
04:11:22.235231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.538 [2024-11-26 04:11:22.235334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:20.538 [2024-11-26 04:11:22.235351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:15:20.538 [2024-11-26 04:11:22.235359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.538 [2024-11-26 04:11:22.236528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.539 [2024-11-26 04:11:22.236549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:20.539 [2024-11-26 04:11:22.236559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:15:20.539 [2024-11-26 04:11:22.236565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.539 [2024-11-26 04:11:22.237450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.539 [2024-11-26 04:11:22.237478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:20.539 [2024-11-26 04:11:22.237488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:15:20.539 [2024-11-26 04:11:22.237494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.539 [2024-11-26 04:11:22.238650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.539 [2024-11-26 04:11:22.238677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:20.539 [2024-11-26 04:11:22.238687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:15:20.539 [2024-11-26 04:11:22.238693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.539 [2024-11-26 04:11:22.238724] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:20.539 [2024-11-26 04:11:22.238737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.238995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239038] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 
04:11:22.239239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:20.539 [2024-11-26 04:11:22.239377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:15:20.540 [2024-11-26 04:11:22.239449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:20.540 [2024-11-26 04:11:22.239587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:20.540 [2024-11-26 04:11:22.239596] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a761f2a-7b57-4683-924b-3bfae6ab70d6 00:15:20.540 [2024-11-26 04:11:22.239604] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:20.540 [2024-11-26 04:11:22.239612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:20.540 [2024-11-26 04:11:22.239619] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:20.540 [2024-11-26 04:11:22.239628] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:20.540 [2024-11-26 04:11:22.239634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:20.540 [2024-11-26 04:11:22.239643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:20.540 [2024-11-26 04:11:22.239652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:20.540 [2024-11-26 04:11:22.239659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:20.540 [2024-11-26 04:11:22.239665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:20.540 [2024-11-26 04:11:22.239675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.540 [2024-11-26 04:11:22.239682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:20.540 [2024-11-26 04:11:22.239693] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:15:20.540 [2024-11-26 04:11:22.239699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.241008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.540 [2024-11-26 04:11:22.241026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:20.540 [2024-11-26 04:11:22.241036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:15:20.540 [2024-11-26 04:11:22.241043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.241106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.540 [2024-11-26 04:11:22.241117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:20.540 [2024-11-26 04:11:22.241127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:20.540 [2024-11-26 04:11:22.241134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.246262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.246369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:20.540 [2024-11-26 04:11:22.246423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.246447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.246539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.246573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:20.540 [2024-11-26 04:11:22.246629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.246651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.246708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.246740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:20.540 [2024-11-26 04:11:22.246762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.246837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.246876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.246899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:20.540 [2024-11-26 04:11:22.246991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.247012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.255956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.256110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:20.540 [2024-11-26 04:11:22.256161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.256185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.259779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.259886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:15:20.540 [2024-11-26 04:11:22.259942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.259964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.260004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.260108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:20.540 [2024-11-26 04:11:22.260134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.260154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.260198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.260239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:20.540 [2024-11-26 04:11:22.260260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.260279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.260393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.260455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:20.540 [2024-11-26 04:11:22.260519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.260543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.260632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.260664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:20.540 [2024-11-26 04:11:22.260691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.260735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.260783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.260838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:20.540 [2024-11-26 04:11:22.260863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.260921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.261029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.540 [2024-11-26 04:11:22.261061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:20.540 [2024-11-26 04:11:22.261146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.540 [2024-11-26 04:11:22.261169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.540 [2024-11-26 04:11:22.261313] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.428 ms, result 0 00:15:20.799 04:11:22 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:20.800 [2024-11-26 04:11:22.476089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:20.800 [2024-11-26 04:11:22.476389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83507 ] 00:15:21.058 [2024-11-26 04:11:22.624841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.058 [2024-11-26 04:11:22.655409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.058 [2024-11-26 04:11:22.738661] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:21.058 [2024-11-26 04:11:22.738820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:21.318 [2024-11-26 04:11:22.883800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.318 [2024-11-26 04:11:22.883989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:21.318 [2024-11-26 04:11:22.884047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:21.319 [2024-11-26 04:11:22.884075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.886287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.886400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:21.319 [2024-11-26 04:11:22.886490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:15:21.319 [2024-11-26 04:11:22.886530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.886711] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:21.319 [2024-11-26 04:11:22.887021] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:21.319 [2024-11-26 04:11:22.887110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.887158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:21.319 [2024-11-26 04:11:22.887186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:15:21.319 [2024-11-26 04:11:22.887208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.888406] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:21.319 [2024-11-26 04:11:22.890718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.890825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:21.319 [2024-11-26 04:11:22.890876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:15:21.319 [2024-11-26 04:11:22.890906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.890969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.891051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:21.319 [2024-11-26 04:11:22.891101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:21.319 [2024-11-26 04:11:22.891122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.895694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 
04:11:22.895788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:21.319 [2024-11-26 04:11:22.895802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.517 ms 00:15:21.319 [2024-11-26 04:11:22.895809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.895893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.895906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:21.319 [2024-11-26 04:11:22.895916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:21.319 [2024-11-26 04:11:22.895923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.895950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.895958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:21.319 [2024-11-26 04:11:22.895965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:21.319 [2024-11-26 04:11:22.895972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.895996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:21.319 [2024-11-26 04:11:22.897280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.897309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:21.319 [2024-11-26 04:11:22.897318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:15:21.319 [2024-11-26 04:11:22.897325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.897361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.897372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:21.319 [2024-11-26 04:11:22.897379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:21.319 [2024-11-26 04:11:22.897389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.897407] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:21.319 [2024-11-26 04:11:22.897425] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:21.319 [2024-11-26 04:11:22.897461] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:21.319 [2024-11-26 04:11:22.897478] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:21.319 [2024-11-26 04:11:22.897566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:21.319 [2024-11-26 04:11:22.897579] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:21.319 [2024-11-26 04:11:22.897589] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:21.319 [2024-11-26 04:11:22.897599] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:21.319 [2024-11-26 04:11:22.897607] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:21.319 [2024-11-26 04:11:22.897616] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:21.319 [2024-11-26 04:11:22.897623] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:21.319 [2024-11-26 04:11:22.897634] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:21.319 [2024-11-26 04:11:22.897641] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:21.319 [2024-11-26 04:11:22.897648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.897654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:21.319 [2024-11-26 04:11:22.897662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:15:21.319 [2024-11-26 04:11:22.897668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.897740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.319 [2024-11-26 04:11:22.897748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:21.319 [2024-11-26 04:11:22.897761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:21.319 [2024-11-26 04:11:22.897772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.319 [2024-11-26 04:11:22.897850] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:21.319 [2024-11-26 04:11:22.897859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:21.319 [2024-11-26 04:11:22.897867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:21.319 [2024-11-26 04:11:22.897878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.319 [2024-11-26 04:11:22.897885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:21.319 [2024-11-26 04:11:22.897891] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:21.320 [2024-11-26 04:11:22.897898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:21.320 [2024-11-26 04:11:22.897905] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:21.320 [2024-11-26 04:11:22.897911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:21.320 [2024-11-26 04:11:22.897918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:21.320 [2024-11-26 04:11:22.897925] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:21.320 [2024-11-26 04:11:22.897937] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:21.320 [2024-11-26 04:11:22.897944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:21.320 [2024-11-26 04:11:22.897952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:21.320 [2024-11-26 04:11:22.897959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:21.320 [2024-11-26 04:11:22.897968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.320 [2024-11-26 04:11:22.897975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:21.320 [2024-11-26 04:11:22.897983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:21.320 [2024-11-26 04:11:22.897991] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:15:21.320 [2024-11-26 04:11:22.897998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:21.320 [2024-11-26 04:11:22.898006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:21.320 [2024-11-26 04:11:22.898013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:21.320 [2024-11-26 04:11:22.898020] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:21.320 [2024-11-26 04:11:22.898028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:21.320 [2024-11-26 04:11:22.898035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:21.320 [2024-11-26 04:11:22.898043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:21.320 [2024-11-26 04:11:22.898050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:21.320 [2024-11-26 04:11:22.898057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:21.320 [2024-11-26 04:11:22.898065] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:21.320 [2024-11-26 04:11:22.898072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:21.320 [2024-11-26 04:11:22.898079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:21.320 [2024-11-26 04:11:22.898090] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:21.320 [2024-11-26 04:11:22.898098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:21.320 [2024-11-26 04:11:22.898105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:21.320 [2024-11-26 04:11:22.898112] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:21.320 [2024-11-26 04:11:22.898119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:21.320 [2024-11-26 04:11:22.898126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:21.320 [2024-11-26 04:11:22.898133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:21.320 [2024-11-26 04:11:22.898140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:21.320 [2024-11-26 04:11:22.898147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:21.320 [2024-11-26 04:11:22.898154] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:21.320 [2024-11-26 04:11:22.898162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:21.320 [2024-11-26 04:11:22.898170] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:21.320 [2024-11-26 04:11:22.898178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:21.320 [2024-11-26 04:11:22.898186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:21.320 [2024-11-26 04:11:22.898194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:21.320 [2024-11-26 04:11:22.898201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:21.320 [2024-11-26 04:11:22.898210] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:21.320 [2024-11-26 04:11:22.898217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:21.320 [2024-11-26 04:11:22.898225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:21.320 [2024-11-26 04:11:22.898233] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:21.320 [2024-11-26 04:11:22.898243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:21.320 [2024-11-26 04:11:22.898254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:21.320 [2024-11-26 04:11:22.898262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:21.320 [2024-11-26 04:11:22.898270] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:21.320 [2024-11-26 04:11:22.898278] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:21.320 [2024-11-26 04:11:22.898286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:21.320 [2024-11-26 04:11:22.898294] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:21.320 [2024-11-26 04:11:22.898302] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:21.320 [2024-11-26 04:11:22.898310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:21.320 [2024-11-26 04:11:22.898318] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:21.320 [2024-11-26 04:11:22.898326] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:21.320 [2024-11-26 04:11:22.898333] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:21.320 [2024-11-26 04:11:22.898342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:21.320 [2024-11-26 04:11:22.898349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:21.320 [2024-11-26 04:11:22.898356] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:21.320 [2024-11-26 04:11:22.898363] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:21.320 [2024-11-26 04:11:22.898371] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:21.321 [2024-11-26 04:11:22.898378] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:21.321 [2024-11-26 04:11:22.898384] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:21.321 [2024-11-26 04:11:22.898392] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:21.321 [2024-11-26 04:11:22.898399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.898406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:21.321 [2024-11-26 04:11:22.898413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:15:21.321 [2024-11-26 04:11:22.898419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.904367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.904470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:21.321 [2024-11-26 04:11:22.904537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.901 ms 00:15:21.321 [2024-11-26 04:11:22.904566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.904683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.904722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:21.321 [2024-11-26 04:11:22.904772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:21.321 [2024-11-26 04:11:22.904794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.920947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.921144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:21.321 [2024-11-26 04:11:22.921231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.116 ms 00:15:21.321 [2024-11-26 04:11:22.921269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.921385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.921432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:21.321 [2024-11-26 04:11:22.921462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:21.321 [2024-11-26 04:11:22.921582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.921957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.922086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:21.321 [2024-11-26 04:11:22.922154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:15:21.321 [2024-11-26 04:11:22.922190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.922365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.922410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:21.321 [2024-11-26 04:11:22.922490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:15:21.321 [2024-11-26 04:11:22.922543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.928425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.928583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:21.321 [2024-11-26 04:11:22.928656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.834 ms 00:15:21.321 
[2024-11-26 04:11:22.928732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.931115] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:15:21.321 [2024-11-26 04:11:22.931217] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:21.321 [2024-11-26 04:11:22.931237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.931244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:21.321 [2024-11-26 04:11:22.931257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.367 ms 00:15:21.321 [2024-11-26 04:11:22.931265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.945755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.945847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:21.321 [2024-11-26 04:11:22.945894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.452 ms 00:15:21.321 [2024-11-26 04:11:22.945916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.947620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.947717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:21.321 [2024-11-26 04:11:22.947762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:15:21.321 [2024-11-26 04:11:22.947783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.948994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.949086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:21.321 [2024-11-26 04:11:22.949149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:15:21.321 [2024-11-26 04:11:22.949169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.949375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.949436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:21.321 [2024-11-26 04:11:22.949547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:15:21.321 [2024-11-26 04:11:22.949577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.967041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.967209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:21.321 [2024-11-26 04:11:22.967258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.425 ms 00:15:21.321 [2024-11-26 04:11:22.967280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.974947] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:21.321 [2024-11-26 04:11:22.988995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.989165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:21.321 [2024-11-26 04:11:22.989218] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.353 ms 00:15:21.321 [2024-11-26 04:11:22.989242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.989345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.989375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:21.321 [2024-11-26 04:11:22.989395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:21.321 [2024-11-26 04:11:22.989414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.321 [2024-11-26 04:11:22.989478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.321 [2024-11-26 04:11:22.989606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:21.321 [2024-11-26 04:11:22.989627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:21.321 [2024-11-26 04:11:22.989649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.322 [2024-11-26 04:11:22.990834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.322 [2024-11-26 04:11:22.990940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:21.322 [2024-11-26 04:11:22.990996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.150 ms 00:15:21.322 [2024-11-26 04:11:22.991017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.322 [2024-11-26 04:11:22.991070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.322 [2024-11-26 04:11:22.991128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:21.322 [2024-11-26 04:11:22.991158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:21.322 [2024-11-26 04:11:22.991177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.322 [2024-11-26 04:11:22.991250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:21.322 [2024-11-26 04:11:22.991276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.322 [2024-11-26 04:11:22.991295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:21.322 [2024-11-26 04:11:22.991373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:21.322 [2024-11-26 04:11:22.991425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.322 [2024-11-26 04:11:22.994850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.322 [2024-11-26 04:11:22.994954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:21.322 [2024-11-26 04:11:22.995003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:15:21.322 [2024-11-26 04:11:22.995026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.322 [2024-11-26 04:11:22.995104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.322 [2024-11-26 04:11:22.995182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:21.322 [2024-11-26 04:11:22.995228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:21.322 [2024-11-26 04:11:22.995246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.322 [2024-11-26 04:11:22.996179] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:21.322 [2024-11-26 04:11:22.997235] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.130 ms, result 0 00:15:21.322 [2024-11-26 04:11:22.997863] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:21.322 [2024-11-26 04:11:23.007293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:22.695  [2024-11-26T04:11:25.397Z] Copying: 46/256 [MB] (46 MBps) [2024-11-26T04:11:26.330Z] Copying: 92/256 [MB] (46 MBps) [2024-11-26T04:11:27.291Z] Copying: 134/256 [MB] (41 MBps) [2024-11-26T04:11:28.240Z] Copying: 179/256 [MB] (44 MBps) [2024-11-26T04:11:28.806Z] Copying: 222/256 [MB] (42 MBps) [2024-11-26T04:11:29.374Z] Copying: 256/256 [MB] (average 44 MBps)[2024-11-26 04:11:29.199297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:27.606 [2024-11-26 04:11:29.200480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.200537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:27.606 [2024-11-26 04:11:29.200554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:27.606 [2024-11-26 04:11:29.200562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.606 [2024-11-26 04:11:29.200585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:27.606 [2024-11-26 04:11:29.201046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.201091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:27.606 [2024-11-26 04:11:29.201101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:15:27.606 [2024-11-26 04:11:29.201109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.606 [2024-11-26 04:11:29.201390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.201410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:27.606 [2024-11-26 04:11:29.201419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:15:27.606 [2024-11-26 04:11:29.201428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.606 [2024-11-26 04:11:29.205281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.205306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:27.606 [2024-11-26 04:11:29.205316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.834 ms 00:15:27.606 [2024-11-26 04:11:29.205324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.606 [2024-11-26 04:11:29.212414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.212586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:27.606 [2024-11-26 04:11:29.212604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.056 ms 00:15:27.606 [2024-11-26 04:11:29.212613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.606 [2024-11-26 04:11:29.214211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.214245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:27.606 [2024-11-26 04:11:29.214254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:15:27.606 [2024-11-26 04:11:29.214261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.606 [2024-11-26 04:11:29.217690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.217722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:27.606 [2024-11-26 04:11:29.217731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:15:27.606 [2024-11-26 04:11:29.217748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.606 [2024-11-26 04:11:29.217871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.217881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:27.606 [2024-11-26 04:11:29.217889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:15:27.606 [2024-11-26 04:11:29.217896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.606 [2024-11-26 04:11:29.219734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.606 [2024-11-26 04:11:29.219856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:27.606 [2024-11-26 04:11:29.219871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.818 ms 00:15:27.607 [2024-11-26 04:11:29.219878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.607 [2024-11-26 04:11:29.220984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.607 [2024-11-26 04:11:29.221015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:27.607 [2024-11-26 04:11:29.221024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:15:27.607 [2024-11-26 04:11:29.221030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.607 [2024-11-26 04:11:29.223709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.607 [2024-11-26 04:11:29.223740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:27.607 [2024-11-26 04:11:29.223750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.660 ms 00:15:27.607 [2024-11-26 04:11:29.223757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.607 [2024-11-26 04:11:29.224787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.607 [2024-11-26 04:11:29.224817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:27.607 [2024-11-26 04:11:29.224825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:15:27.607 [2024-11-26 04:11:29.224832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.607 [2024-11-26 04:11:29.224860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:27.607 [2024-11-26 04:11:29.224874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224891] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.224996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225085] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 
[2024-11-26 04:11:29.225268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:27.607 [2024-11-26 04:11:29.225436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:15:27.608 [2024-11-26 04:11:29.225450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:27.608 [2024-11-26 04:11:29.225637] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:27.608 [2024-11-26 04:11:29.225645] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a761f2a-7b57-4683-924b-3bfae6ab70d6 
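Every one of the hundred band records above reports the same state (0 of 261120 blocks valid, wr_cnt 0, free), which is what the trim test should leave behind once all user data has been invalidated. When scanning a console log like this by hand, a short pipeline can condense the dump into a per-state tally. This is a hypothetical convenience, not part of the SPDK test suite; it assumes the console output was saved one record per line to a file named build.log:

  # Hypothetical helper: tally FTL band states from a saved console log
  grep 'ftl_dev_dump_bands' build.log \
    | awk '{ for (i = 1; i <= NF; i++) if ($i == "state:") n[$(i+1)]++ }
           END { for (s in n) print s, n[s] }'
  # For the dump above this prints: free 100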
00:15:27.608 [2024-11-26 04:11:29.225652] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:27.608 [2024-11-26 04:11:29.225665] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:27.608 [2024-11-26 04:11:29.225672] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:27.608 [2024-11-26 04:11:29.225680] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:27.608 [2024-11-26 04:11:29.225694] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:27.608 [2024-11-26 04:11:29.225702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:27.608 [2024-11-26 04:11:29.225709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:27.608 [2024-11-26 04:11:29.225715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:27.608 [2024-11-26 04:11:29.225722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:27.608 [2024-11-26 04:11:29.225728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.608 [2024-11-26 04:11:29.225735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:27.608 [2024-11-26 04:11:29.225750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:15:27.608 [2024-11-26 04:11:29.225757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.227190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.608 [2024-11-26 04:11:29.227210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:27.608 [2024-11-26 04:11:29.227219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:15:27.608 [2024-11-26 04:11:29.227225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.227283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.608 [2024-11-26 04:11:29.227291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:27.608 [2024-11-26 04:11:29.227299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:27.608 [2024-11-26 04:11:29.227306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.232418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.232882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:27.608 [2024-11-26 04:11:29.232978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.233019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.233214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.233242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:27.608 [2024-11-26 04:11:29.233263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.233291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.233404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.233429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:27.608 [2024-11-26 04:11:29.233450] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.233468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.233555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.233585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:27.608 [2024-11-26 04:11:29.233605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.233625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.247321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.247365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:27.608 [2024-11-26 04:11:29.247376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.247384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.251299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.251337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:27.608 [2024-11-26 04:11:29.251353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.251369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.251395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.251403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:27.608 [2024-11-26 04:11:29.251410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.251418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.251456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.251467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:27.608 [2024-11-26 04:11:29.251475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.251484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.251557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.251566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:27.608 [2024-11-26 04:11:29.251574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.251581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.251612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.251621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:27.608 [2024-11-26 04:11:29.251628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.251638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.251674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.251681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:15:27.608 [2024-11-26 04:11:29.251689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.251695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.251738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.608 [2024-11-26 04:11:29.251747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:27.608 [2024-11-26 04:11:29.251754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.608 [2024-11-26 04:11:29.251764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.608 [2024-11-26 04:11:29.251903] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.389 ms, result 0 00:15:27.866 00:15:27.867 00:15:27.867 04:11:29 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:15:28.432 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:15:28.432 04:11:29 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:28.432 04:11:29 -- ftl/trim.sh@109 -- # fio_kill 00:15:28.432 04:11:29 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:15:28.432 04:11:29 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:28.432 04:11:29 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:15:28.432 04:11:30 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:15:28.432 Process with pid 83471 is not found 00:15:28.432 04:11:30 -- ftl/trim.sh@20 -- # killprocess 83471 00:15:28.432 04:11:30 -- common/autotest_common.sh@936 -- # '[' -z 83471 ']' 00:15:28.432 04:11:30 -- common/autotest_common.sh@940 -- # kill -0 83471 00:15:28.433 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83471) - No such process 00:15:28.433 04:11:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83471 is not found' 00:15:28.433 ************************************ 00:15:28.433 END TEST ftl_trim 00:15:28.433 ************************************ 00:15:28.433 00:15:28.433 real 0m42.034s 00:15:28.433 user 1m4.186s 00:15:28.433 sys 0m4.897s 00:15:28.433 04:11:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:28.433 04:11:30 -- common/autotest_common.sh@10 -- # set +x 00:15:28.433 04:11:30 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:15:28.433 04:11:30 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:28.433 04:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.433 04:11:30 -- common/autotest_common.sh@10 -- # set +x 00:15:28.433 ************************************ 00:15:28.433 START TEST ftl_restore 00:15:28.433 ************************************ 00:15:28.433 04:11:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:15:28.433 * Looking for test storage... 
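run_test above is the harness wrapper that prints the START/END banners and the timing summary; outside the harness, the same test can be launched directly. A minimal sketch, using exactly the script path and arguments recorded above (the PCI addresses are specific to this QEMU VM, and root privileges are an assumption, typically needed for device access):

  # Direct invocation of the test the harness runs here:
  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
  # -c names the NV cache device (0000:00:06.0); the positional
  # argument is the base device (0000:00:07.0), per the getopts
  # handling traced further below.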
00:15:28.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:28.433 04:11:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:28.433 04:11:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:28.433 04:11:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:28.692 04:11:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:28.692 04:11:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:28.692 04:11:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:28.692 04:11:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:28.692 04:11:30 -- scripts/common.sh@335 -- # IFS=.-: 00:15:28.692 04:11:30 -- scripts/common.sh@335 -- # read -ra ver1 00:15:28.692 04:11:30 -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.692 04:11:30 -- scripts/common.sh@336 -- # read -ra ver2 00:15:28.692 04:11:30 -- scripts/common.sh@337 -- # local 'op=<' 00:15:28.692 04:11:30 -- scripts/common.sh@339 -- # ver1_l=2 00:15:28.692 04:11:30 -- scripts/common.sh@340 -- # ver2_l=1 00:15:28.692 04:11:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:28.692 04:11:30 -- scripts/common.sh@343 -- # case "$op" in 00:15:28.692 04:11:30 -- scripts/common.sh@344 -- # : 1 00:15:28.692 04:11:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:28.692 04:11:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.692 04:11:30 -- scripts/common.sh@364 -- # decimal 1 00:15:28.692 04:11:30 -- scripts/common.sh@352 -- # local d=1 00:15:28.692 04:11:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.692 04:11:30 -- scripts/common.sh@354 -- # echo 1 00:15:28.692 04:11:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:28.692 04:11:30 -- scripts/common.sh@365 -- # decimal 2 00:15:28.692 04:11:30 -- scripts/common.sh@352 -- # local d=2 00:15:28.692 04:11:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.692 04:11:30 -- scripts/common.sh@354 -- # echo 2 00:15:28.692 04:11:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:28.692 04:11:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:28.692 04:11:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:28.692 04:11:30 -- scripts/common.sh@367 -- # return 0 00:15:28.692 04:11:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.692 04:11:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.692 --rc genhtml_branch_coverage=1 00:15:28.692 --rc genhtml_function_coverage=1 00:15:28.692 --rc genhtml_legend=1 00:15:28.692 --rc geninfo_all_blocks=1 00:15:28.692 --rc geninfo_unexecuted_blocks=1 00:15:28.692 00:15:28.692 ' 00:15:28.692 04:11:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.692 --rc genhtml_branch_coverage=1 00:15:28.692 --rc genhtml_function_coverage=1 00:15:28.692 --rc genhtml_legend=1 00:15:28.692 --rc geninfo_all_blocks=1 00:15:28.692 --rc geninfo_unexecuted_blocks=1 00:15:28.692 00:15:28.692 ' 00:15:28.692 04:11:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.692 --rc genhtml_branch_coverage=1 00:15:28.692 --rc genhtml_function_coverage=1 00:15:28.692 --rc genhtml_legend=1 00:15:28.692 --rc geninfo_all_blocks=1 00:15:28.692 --rc geninfo_unexecuted_blocks=1 00:15:28.692 00:15:28.692 ' 00:15:28.692 04:11:30 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.692 --rc genhtml_branch_coverage=1 00:15:28.692 --rc genhtml_function_coverage=1 00:15:28.692 --rc genhtml_legend=1 00:15:28.692 --rc geninfo_all_blocks=1 00:15:28.692 --rc geninfo_unexecuted_blocks=1 00:15:28.692 00:15:28.692 ' 00:15:28.692 04:11:30 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:28.692 04:11:30 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:15:28.692 04:11:30 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:28.692 04:11:30 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:28.692 04:11:30 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:28.692 04:11:30 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:28.692 04:11:30 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.692 04:11:30 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:28.692 04:11:30 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:28.692 04:11:30 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.692 04:11:30 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.692 04:11:30 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:28.692 04:11:30 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:28.692 04:11:30 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:28.692 04:11:30 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:28.692 04:11:30 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:28.692 04:11:30 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:28.692 04:11:30 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.692 04:11:30 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.692 04:11:30 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:28.692 04:11:30 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:28.692 04:11:30 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:28.692 04:11:30 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:28.692 04:11:30 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:28.692 04:11:30 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:28.692 04:11:30 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:28.692 04:11:30 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:28.692 04:11:30 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.692 04:11:30 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.692 04:11:30 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.692 04:11:30 -- ftl/restore.sh@13 -- # mktemp -d 00:15:28.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
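The trace that follows is the standard SPDK test pattern: start spdk_tgt in the background, wait for its RPC socket at /var/tmp/spdk.sock, then drive it with rpc.py. A condensed sketch of that pattern, built only from the paths visible in this trace; the rpc_get_methods polling loop is an assumed stand-in for what waitforlisten does internally:

  # Start the target and wait until its RPC socket answers
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  svcpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # First RPC of the test proper, as traced below: attach the base NVMe device
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0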
00:15:28.692 04:11:30 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.G7HYbvcHLR 00:15:28.692 04:11:30 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:15:28.692 04:11:30 -- ftl/restore.sh@16 -- # case $opt in 00:15:28.692 04:11:30 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:15:28.692 04:11:30 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:15:28.692 04:11:30 -- ftl/restore.sh@23 -- # shift 2 00:15:28.692 04:11:30 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:15:28.692 04:11:30 -- ftl/restore.sh@25 -- # timeout=240 00:15:28.692 04:11:30 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:15:28.692 04:11:30 -- ftl/restore.sh@39 -- # svcpid=83654 00:15:28.693 04:11:30 -- ftl/restore.sh@41 -- # waitforlisten 83654 00:15:28.693 04:11:30 -- common/autotest_common.sh@829 -- # '[' -z 83654 ']' 00:15:28.693 04:11:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.693 04:11:30 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.693 04:11:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.693 04:11:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.693 04:11:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.693 04:11:30 -- common/autotest_common.sh@10 -- # set +x 00:15:28.693 [2024-11-26 04:11:30.303207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:28.693 [2024-11-26 04:11:30.303492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83654 ] 00:15:28.693 [2024-11-26 04:11:30.453139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.952 [2024-11-26 04:11:30.484311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.952 [2024-11-26 04:11:30.484518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.519 04:11:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.519 04:11:31 -- common/autotest_common.sh@862 -- # return 0 00:15:29.519 04:11:31 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:29.519 04:11:31 -- ftl/common.sh@54 -- # local name=nvme0 00:15:29.519 04:11:31 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:29.519 04:11:31 -- ftl/common.sh@56 -- # local size=103424 00:15:29.519 04:11:31 -- ftl/common.sh@59 -- # local base_bdev 00:15:29.519 04:11:31 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:29.777 04:11:31 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:29.777 04:11:31 -- ftl/common.sh@62 -- # local base_size 00:15:29.777 04:11:31 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:29.777 04:11:31 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:29.777 04:11:31 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:29.777 04:11:31 -- common/autotest_common.sh@1369 -- # local bs 00:15:29.777 04:11:31 -- common/autotest_common.sh@1370 -- # local nb 00:15:29.777 04:11:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:30.035 04:11:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:30.035 { 00:15:30.035 "name": 
"nvme0n1", 00:15:30.035 "aliases": [ 00:15:30.035 "298aaf34-341d-49fd-b77a-6e28b1b3335f" 00:15:30.035 ], 00:15:30.035 "product_name": "NVMe disk", 00:15:30.036 "block_size": 4096, 00:15:30.036 "num_blocks": 1310720, 00:15:30.036 "uuid": "298aaf34-341d-49fd-b77a-6e28b1b3335f", 00:15:30.036 "assigned_rate_limits": { 00:15:30.036 "rw_ios_per_sec": 0, 00:15:30.036 "rw_mbytes_per_sec": 0, 00:15:30.036 "r_mbytes_per_sec": 0, 00:15:30.036 "w_mbytes_per_sec": 0 00:15:30.036 }, 00:15:30.036 "claimed": true, 00:15:30.036 "claim_type": "read_many_write_one", 00:15:30.036 "zoned": false, 00:15:30.036 "supported_io_types": { 00:15:30.036 "read": true, 00:15:30.036 "write": true, 00:15:30.036 "unmap": true, 00:15:30.036 "write_zeroes": true, 00:15:30.036 "flush": true, 00:15:30.036 "reset": true, 00:15:30.036 "compare": true, 00:15:30.036 "compare_and_write": false, 00:15:30.036 "abort": true, 00:15:30.036 "nvme_admin": true, 00:15:30.036 "nvme_io": true 00:15:30.036 }, 00:15:30.036 "driver_specific": { 00:15:30.036 "nvme": [ 00:15:30.036 { 00:15:30.036 "pci_address": "0000:00:07.0", 00:15:30.036 "trid": { 00:15:30.036 "trtype": "PCIe", 00:15:30.036 "traddr": "0000:00:07.0" 00:15:30.036 }, 00:15:30.036 "ctrlr_data": { 00:15:30.036 "cntlid": 0, 00:15:30.036 "vendor_id": "0x1b36", 00:15:30.036 "model_number": "QEMU NVMe Ctrl", 00:15:30.036 "serial_number": "12341", 00:15:30.036 "firmware_revision": "8.0.0", 00:15:30.036 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:30.036 "oacs": { 00:15:30.036 "security": 0, 00:15:30.036 "format": 1, 00:15:30.036 "firmware": 0, 00:15:30.036 "ns_manage": 1 00:15:30.036 }, 00:15:30.036 "multi_ctrlr": false, 00:15:30.036 "ana_reporting": false 00:15:30.036 }, 00:15:30.036 "vs": { 00:15:30.036 "nvme_version": "1.4" 00:15:30.036 }, 00:15:30.036 "ns_data": { 00:15:30.036 "id": 1, 00:15:30.036 "can_share": false 00:15:30.036 } 00:15:30.036 } 00:15:30.036 ], 00:15:30.036 "mp_policy": "active_passive" 00:15:30.036 } 00:15:30.036 } 00:15:30.036 ]' 00:15:30.036 04:11:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:30.036 04:11:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:30.036 04:11:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:30.036 04:11:31 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:30.036 04:11:31 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:30.036 04:11:31 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:30.036 04:11:31 -- ftl/common.sh@63 -- # base_size=5120 00:15:30.036 04:11:31 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:30.036 04:11:31 -- ftl/common.sh@67 -- # clear_lvols 00:15:30.036 04:11:31 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:30.036 04:11:31 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:30.294 04:11:31 -- ftl/common.sh@28 -- # stores=33be9b9a-ef14-4859-b5d7-afbf78f2f904 00:15:30.294 04:11:31 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:30.294 04:11:31 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33be9b9a-ef14-4859-b5d7-afbf78f2f904 00:15:30.294 04:11:32 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:30.553 04:11:32 -- ftl/common.sh@68 -- # lvs=8d5588d7-d2b7-43cf-96b4-dd7923846e5e 00:15:30.553 04:11:32 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8d5588d7-d2b7-43cf-96b4-dd7923846e5e 00:15:30.811 04:11:32 -- ftl/restore.sh@43 
-- # split_bdev=68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:30.811 04:11:32 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:15:30.811 04:11:32 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:30.811 04:11:32 -- ftl/common.sh@35 -- # local name=nvc0 00:15:30.811 04:11:32 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:30.811 04:11:32 -- ftl/common.sh@37 -- # local base_bdev=68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:30.811 04:11:32 -- ftl/common.sh@38 -- # local cache_size= 00:15:30.811 04:11:32 -- ftl/common.sh@41 -- # get_bdev_size 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:30.811 04:11:32 -- common/autotest_common.sh@1367 -- # local bdev_name=68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:30.811 04:11:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:30.811 04:11:32 -- common/autotest_common.sh@1369 -- # local bs 00:15:30.811 04:11:32 -- common/autotest_common.sh@1370 -- # local nb 00:15:30.811 04:11:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:31.071 04:11:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:31.071 { 00:15:31.071 "name": "68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9", 00:15:31.071 "aliases": [ 00:15:31.071 "lvs/nvme0n1p0" 00:15:31.071 ], 00:15:31.071 "product_name": "Logical Volume", 00:15:31.071 "block_size": 4096, 00:15:31.071 "num_blocks": 26476544, 00:15:31.071 "uuid": "68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9", 00:15:31.071 "assigned_rate_limits": { 00:15:31.071 "rw_ios_per_sec": 0, 00:15:31.071 "rw_mbytes_per_sec": 0, 00:15:31.071 "r_mbytes_per_sec": 0, 00:15:31.071 "w_mbytes_per_sec": 0 00:15:31.071 }, 00:15:31.071 "claimed": false, 00:15:31.071 "zoned": false, 00:15:31.071 "supported_io_types": { 00:15:31.071 "read": true, 00:15:31.071 "write": true, 00:15:31.071 "unmap": true, 00:15:31.071 "write_zeroes": true, 00:15:31.071 "flush": false, 00:15:31.071 "reset": true, 00:15:31.071 "compare": false, 00:15:31.071 "compare_and_write": false, 00:15:31.071 "abort": false, 00:15:31.071 "nvme_admin": false, 00:15:31.071 "nvme_io": false 00:15:31.071 }, 00:15:31.071 "driver_specific": { 00:15:31.071 "lvol": { 00:15:31.071 "lvol_store_uuid": "8d5588d7-d2b7-43cf-96b4-dd7923846e5e", 00:15:31.071 "base_bdev": "nvme0n1", 00:15:31.071 "thin_provision": true, 00:15:31.071 "snapshot": false, 00:15:31.071 "clone": false, 00:15:31.071 "esnap_clone": false 00:15:31.071 } 00:15:31.071 } 00:15:31.071 } 00:15:31.071 ]' 00:15:31.071 04:11:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:31.071 04:11:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:31.071 04:11:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:31.071 04:11:32 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:31.071 04:11:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:31.071 04:11:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:31.071 04:11:32 -- ftl/common.sh@41 -- # local base_size=5171 00:15:31.071 04:11:32 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:31.071 04:11:32 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:31.331 04:11:32 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:31.331 04:11:32 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:31.331 04:11:32 -- ftl/common.sh@48 -- # get_bdev_size 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:31.331 04:11:32 -- 
common/autotest_common.sh@1367 -- # local bdev_name=68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:31.331 04:11:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:31.331 04:11:32 -- common/autotest_common.sh@1369 -- # local bs 00:15:31.331 04:11:32 -- common/autotest_common.sh@1370 -- # local nb 00:15:31.331 04:11:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:31.589 04:11:33 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:31.589 { 00:15:31.589 "name": "68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9", 00:15:31.589 "aliases": [ 00:15:31.589 "lvs/nvme0n1p0" 00:15:31.589 ], 00:15:31.590 "product_name": "Logical Volume", 00:15:31.590 "block_size": 4096, 00:15:31.590 "num_blocks": 26476544, 00:15:31.590 "uuid": "68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9", 00:15:31.590 "assigned_rate_limits": { 00:15:31.590 "rw_ios_per_sec": 0, 00:15:31.590 "rw_mbytes_per_sec": 0, 00:15:31.590 "r_mbytes_per_sec": 0, 00:15:31.590 "w_mbytes_per_sec": 0 00:15:31.590 }, 00:15:31.590 "claimed": false, 00:15:31.590 "zoned": false, 00:15:31.590 "supported_io_types": { 00:15:31.590 "read": true, 00:15:31.590 "write": true, 00:15:31.590 "unmap": true, 00:15:31.590 "write_zeroes": true, 00:15:31.590 "flush": false, 00:15:31.590 "reset": true, 00:15:31.590 "compare": false, 00:15:31.590 "compare_and_write": false, 00:15:31.590 "abort": false, 00:15:31.590 "nvme_admin": false, 00:15:31.590 "nvme_io": false 00:15:31.590 }, 00:15:31.590 "driver_specific": { 00:15:31.590 "lvol": { 00:15:31.590 "lvol_store_uuid": "8d5588d7-d2b7-43cf-96b4-dd7923846e5e", 00:15:31.590 "base_bdev": "nvme0n1", 00:15:31.590 "thin_provision": true, 00:15:31.590 "snapshot": false, 00:15:31.590 "clone": false, 00:15:31.590 "esnap_clone": false 00:15:31.590 } 00:15:31.590 } 00:15:31.590 } 00:15:31.590 ]' 00:15:31.590 04:11:33 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:31.590 04:11:33 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:31.590 04:11:33 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:31.590 04:11:33 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:31.590 04:11:33 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:31.590 04:11:33 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:31.590 04:11:33 -- ftl/common.sh@48 -- # cache_size=5171 00:15:31.590 04:11:33 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:31.848 04:11:33 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:15:31.848 04:11:33 -- ftl/restore.sh@48 -- # get_bdev_size 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:31.848 04:11:33 -- common/autotest_common.sh@1367 -- # local bdev_name=68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:31.848 04:11:33 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:31.848 04:11:33 -- common/autotest_common.sh@1369 -- # local bs 00:15:31.848 04:11:33 -- common/autotest_common.sh@1370 -- # local nb 00:15:31.848 04:11:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 00:15:31.848 04:11:33 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:31.848 { 00:15:31.848 "name": "68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9", 00:15:31.848 "aliases": [ 00:15:31.848 "lvs/nvme0n1p0" 00:15:31.848 ], 00:15:31.848 "product_name": "Logical Volume", 00:15:31.848 "block_size": 4096, 00:15:31.848 "num_blocks": 26476544, 00:15:31.848 "uuid": 
"68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9", 00:15:31.848 "assigned_rate_limits": { 00:15:31.848 "rw_ios_per_sec": 0, 00:15:31.848 "rw_mbytes_per_sec": 0, 00:15:31.848 "r_mbytes_per_sec": 0, 00:15:31.848 "w_mbytes_per_sec": 0 00:15:31.848 }, 00:15:31.848 "claimed": false, 00:15:31.848 "zoned": false, 00:15:31.848 "supported_io_types": { 00:15:31.848 "read": true, 00:15:31.848 "write": true, 00:15:31.848 "unmap": true, 00:15:31.848 "write_zeroes": true, 00:15:31.848 "flush": false, 00:15:31.848 "reset": true, 00:15:31.848 "compare": false, 00:15:31.848 "compare_and_write": false, 00:15:31.848 "abort": false, 00:15:31.848 "nvme_admin": false, 00:15:31.848 "nvme_io": false 00:15:31.848 }, 00:15:31.848 "driver_specific": { 00:15:31.848 "lvol": { 00:15:31.848 "lvol_store_uuid": "8d5588d7-d2b7-43cf-96b4-dd7923846e5e", 00:15:31.848 "base_bdev": "nvme0n1", 00:15:31.848 "thin_provision": true, 00:15:31.848 "snapshot": false, 00:15:31.848 "clone": false, 00:15:31.848 "esnap_clone": false 00:15:31.848 } 00:15:31.848 } 00:15:31.848 } 00:15:31.848 ]' 00:15:31.848 04:11:33 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:32.108 04:11:33 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:32.108 04:11:33 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:32.108 04:11:33 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:32.108 04:11:33 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:32.108 04:11:33 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:32.108 04:11:33 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:15:32.108 04:11:33 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 --l2p_dram_limit 10' 00:15:32.108 04:11:33 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:15:32.108 04:11:33 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:15:32.108 04:11:33 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:15:32.108 04:11:33 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:15:32.108 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:15:32.108 04:11:33 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 68a7e7d2-543f-4ab0-bbbe-eb94930aa6f9 --l2p_dram_limit 10 -c nvc0n1p0 00:15:32.108 [2024-11-26 04:11:33.831188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.831367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:32.108 [2024-11-26 04:11:33.831386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:32.108 [2024-11-26 04:11:33.831394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.831451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.831459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:32.108 [2024-11-26 04:11:33.831469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:32.108 [2024-11-26 04:11:33.831475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.831493] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:32.108 [2024-11-26 04:11:33.831723] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:32.108 [2024-11-26 04:11:33.831737] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.831743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:32.108 [2024-11-26 04:11:33.831751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:15:32.108 [2024-11-26 04:11:33.831756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.831805] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 67d4fab1-ad45-4e8d-83b0-1fc3f149cedb 00:15:32.108 [2024-11-26 04:11:33.832787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.832805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:32.108 [2024-11-26 04:11:33.832813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:15:32.108 [2024-11-26 04:11:33.832820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.837666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.837777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:32.108 [2024-11-26 04:11:33.837789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:15:32.108 [2024-11-26 04:11:33.837798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.837868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.837876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:32.108 [2024-11-26 04:11:33.837883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:32.108 [2024-11-26 04:11:33.837890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.837927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.837937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:32.108 [2024-11-26 04:11:33.837946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:32.108 [2024-11-26 04:11:33.837953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.837970] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:32.108 [2024-11-26 04:11:33.839254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.839279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:32.108 [2024-11-26 04:11:33.839292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:15:32.108 [2024-11-26 04:11:33.839298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.839327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.839334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:32.108 [2024-11-26 04:11:33.839344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:32.108 [2024-11-26 04:11:33.839349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.839369] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup 
mode 1 00:15:32.108 [2024-11-26 04:11:33.839456] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:32.108 [2024-11-26 04:11:33.839466] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:32.108 [2024-11-26 04:11:33.839475] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:32.108 [2024-11-26 04:11:33.839490] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:32.108 [2024-11-26 04:11:33.839497] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:32.108 [2024-11-26 04:11:33.839519] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:32.108 [2024-11-26 04:11:33.839524] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:32.108 [2024-11-26 04:11:33.839531] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:32.108 [2024-11-26 04:11:33.839536] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:32.108 [2024-11-26 04:11:33.839543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.839549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:32.108 [2024-11-26 04:11:33.839557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:15:32.108 [2024-11-26 04:11:33.839562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.839613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.108 [2024-11-26 04:11:33.839620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:32.108 [2024-11-26 04:11:33.839626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:32.108 [2024-11-26 04:11:33.839631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.108 [2024-11-26 04:11:33.839690] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:32.108 [2024-11-26 04:11:33.839698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:32.108 [2024-11-26 04:11:33.839705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:32.108 [2024-11-26 04:11:33.839711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.108 [2024-11-26 04:11:33.839717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:32.108 [2024-11-26 04:11:33.839723] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:32.108 [2024-11-26 04:11:33.839731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:32.108 [2024-11-26 04:11:33.839736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:32.108 [2024-11-26 04:11:33.839742] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:32.108 [2024-11-26 04:11:33.839747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:32.108 [2024-11-26 04:11:33.839754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:32.108 [2024-11-26 04:11:33.839759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:32.108 [2024-11-26 04:11:33.839767] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:32.108 [2024-11-26 04:11:33.839772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:32.108 [2024-11-26 04:11:33.839779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:32.108 [2024-11-26 04:11:33.839784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.108 [2024-11-26 04:11:33.839791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:32.108 [2024-11-26 04:11:33.839796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:32.108 [2024-11-26 04:11:33.839802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.108 [2024-11-26 04:11:33.839807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:32.108 [2024-11-26 04:11:33.839813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:32.108 [2024-11-26 04:11:33.839818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:32.108 [2024-11-26 04:11:33.839825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:32.108 [2024-11-26 04:11:33.839829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:32.108 [2024-11-26 04:11:33.839836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:32.108 [2024-11-26 04:11:33.839841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:32.108 [2024-11-26 04:11:33.839847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:32.108 [2024-11-26 04:11:33.839851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:32.108 [2024-11-26 04:11:33.839858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:32.109 [2024-11-26 04:11:33.839863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:32.109 [2024-11-26 04:11:33.839869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:32.109 [2024-11-26 04:11:33.839874] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:32.109 [2024-11-26 04:11:33.839880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:32.109 [2024-11-26 04:11:33.839885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:32.109 [2024-11-26 04:11:33.839891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:32.109 [2024-11-26 04:11:33.839896] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:32.109 [2024-11-26 04:11:33.839903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:32.109 [2024-11-26 04:11:33.839908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:32.109 [2024-11-26 04:11:33.839915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:32.109 [2024-11-26 04:11:33.839920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:32.109 [2024-11-26 04:11:33.839927] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:32.109 [2024-11-26 04:11:33.839935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:32.109 [2024-11-26 04:11:33.839943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:32.109 [2024-11-26 04:11:33.839949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:32.109 [2024-11-26 04:11:33.839957] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:32.109 [2024-11-26 04:11:33.839963] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:32.109 [2024-11-26 04:11:33.839972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:32.109 [2024-11-26 04:11:33.839979] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:32.109 [2024-11-26 04:11:33.839985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:32.109 [2024-11-26 04:11:33.839991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:32.109 [2024-11-26 04:11:33.839999] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:32.109 [2024-11-26 04:11:33.840007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:32.109 [2024-11-26 04:11:33.840015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:32.109 [2024-11-26 04:11:33.840021] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:32.109 [2024-11-26 04:11:33.840029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:32.109 [2024-11-26 04:11:33.840034] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:32.109 [2024-11-26 04:11:33.840042] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:32.109 [2024-11-26 04:11:33.840048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:32.109 [2024-11-26 04:11:33.840057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:32.109 [2024-11-26 04:11:33.840062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:32.109 [2024-11-26 04:11:33.840071] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:32.109 [2024-11-26 04:11:33.840077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:32.109 [2024-11-26 04:11:33.840084] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:32.109 [2024-11-26 04:11:33.840090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:32.109 [2024-11-26 04:11:33.840098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:32.109 [2024-11-26 04:11:33.840104] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:32.109 [2024-11-26 04:11:33.840115] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 
00:15:32.109 [2024-11-26 04:11:33.840121] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:32.109 [2024-11-26 04:11:33.840129] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:32.109 [2024-11-26 04:11:33.840135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:32.109 [2024-11-26 04:11:33.840142] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:32.109 [2024-11-26 04:11:33.840148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 04:11:33.840155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:32.109 [2024-11-26 04:11:33.840164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:15:32.109 [2024-11-26 04:11:33.840172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.109 [2024-11-26 04:11:33.845708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 04:11:33.845803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:32.109 [2024-11-26 04:11:33.845847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.501 ms 00:15:32.109 [2024-11-26 04:11:33.845867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.109 [2024-11-26 04:11:33.845950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 04:11:33.846022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:32.109 [2024-11-26 04:11:33.846045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:32.109 [2024-11-26 04:11:33.846062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.109 [2024-11-26 04:11:33.853847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 04:11:33.853945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:32.109 [2024-11-26 04:11:33.853989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.746 ms 00:15:32.109 [2024-11-26 04:11:33.854008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.109 [2024-11-26 04:11:33.854041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 04:11:33.854125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:32.109 [2024-11-26 04:11:33.854143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:32.109 [2024-11-26 04:11:33.854159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.109 [2024-11-26 04:11:33.854473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 04:11:33.854568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:32.109 [2024-11-26 04:11:33.854615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:15:32.109 [2024-11-26 04:11:33.854634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.109 [2024-11-26 04:11:33.854726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 
04:11:33.854750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:32.109 [2024-11-26 04:11:33.854791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:32.109 [2024-11-26 04:11:33.854809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.109 [2024-11-26 04:11:33.859601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 04:11:33.859688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:32.109 [2024-11-26 04:11:33.859731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:15:32.109 [2024-11-26 04:11:33.859750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.109 [2024-11-26 04:11:33.866274] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:15:32.109 [2024-11-26 04:11:33.868607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.109 [2024-11-26 04:11:33.868689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:32.109 [2024-11-26 04:11:33.868736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.800 ms 00:15:32.109 [2024-11-26 04:11:33.868759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.369 [2024-11-26 04:11:33.933738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.369 [2024-11-26 04:11:33.933950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:32.369 [2024-11-26 04:11:33.934018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.936 ms 00:15:32.369 [2024-11-26 04:11:33.934107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.369 [2024-11-26 04:11:33.934164] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
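(Editor's note: on a first startup FTL must scrub the NV cache data region before it can be used, which is why the "Scrub NV cache" action below dominates startup time: 2402.921 ms of the 2545.316 ms total reported at the end of 'FTL startup'. A minimal cross-check of the implied rate, assuming python3 is available on the test VM; the command is illustrative only and is not part of restore.sh:

    python3 -c 'print(4 / 2.402921)'   # 4 GiB scrubbed in ~2.403 s -> ~1.66 GiB/s

)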
00:15:32.369 [2024-11-26 04:11:33.934226] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:34.896 [2024-11-26 04:11:36.337122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.896 [2024-11-26 04:11:36.337348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:34.896 [2024-11-26 04:11:36.337424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2402.921 ms 00:15:34.896 [2024-11-26 04:11:36.337450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.896 [2024-11-26 04:11:36.337659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.896 [2024-11-26 04:11:36.337688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:34.897 [2024-11-26 04:11:36.337747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:15:34.897 [2024-11-26 04:11:36.337770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.340807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.340938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:34.897 [2024-11-26 04:11:36.341019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:15:34.897 [2024-11-26 04:11:36.341061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.343664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.343786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:34.897 [2024-11-26 04:11:36.343864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.443 ms 00:15:34.897 [2024-11-26 04:11:36.343887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.344231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.344316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:34.897 [2024-11-26 04:11:36.344332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:15:34.897 [2024-11-26 04:11:36.344344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.366326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.366457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:34.897 [2024-11-26 04:11:36.366541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.946 ms 00:15:34.897 [2024-11-26 04:11:36.366567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.370377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.370485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:34.897 [2024-11-26 04:11:36.370557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:15:34.897 [2024-11-26 04:11:36.370607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.371775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.371876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:34.897 
[2024-11-26 04:11:36.371928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:15:34.897 [2024-11-26 04:11:36.371950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.375405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.375531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:34.897 [2024-11-26 04:11:36.375590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:15:34.897 [2024-11-26 04:11:36.375644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.375698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.375747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:34.897 [2024-11-26 04:11:36.375800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:34.897 [2024-11-26 04:11:36.375822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.375923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.897 [2024-11-26 04:11:36.375987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:34.897 [2024-11-26 04:11:36.376037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:34.897 [2024-11-26 04:11:36.376059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.897 [2024-11-26 04:11:36.376905] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2545.316 ms, result 0 00:15:34.897 { 00:15:34.897 "name": "ftl0", 00:15:34.897 "uuid": "67d4fab1-ad45-4e8d-83b0-1fc3f149cedb" 00:15:34.897 } 00:15:34.897 04:11:36 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:15:34.897 04:11:36 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:34.897 04:11:36 -- ftl/restore.sh@63 -- # echo ']}' 00:15:34.897 04:11:36 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:35.157 [2024-11-26 04:11:36.761726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.761950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:35.157 [2024-11-26 04:11:36.762010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:35.157 [2024-11-26 04:11:36.762035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.762079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:35.157 [2024-11-26 04:11:36.762553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.762645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:35.157 [2024-11-26 04:11:36.762704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:15:35.157 [2024-11-26 04:11:36.762726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.762999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.763113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:35.157 [2024-11-26 04:11:36.763159] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:15:35.157 [2024-11-26 04:11:36.763181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.766448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.766535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:35.157 [2024-11-26 04:11:36.766589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:15:35.157 [2024-11-26 04:11:36.766610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.772731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.772827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:35.157 [2024-11-26 04:11:36.772879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.074 ms 00:15:35.157 [2024-11-26 04:11:36.772905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.774521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.774624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:35.157 [2024-11-26 04:11:36.774677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:15:35.157 [2024-11-26 04:11:36.774700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.778901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.779008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:35.157 [2024-11-26 04:11:36.779060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.156 ms 00:15:35.157 [2024-11-26 04:11:36.779082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.779217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.779255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:35.157 [2024-11-26 04:11:36.779278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:15:35.157 [2024-11-26 04:11:36.779323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.781148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.781242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:35.157 [2024-11-26 04:11:36.781259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:15:35.157 [2024-11-26 04:11:36.781266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.782693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.782773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:35.157 [2024-11-26 04:11:36.782822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:15:35.157 [2024-11-26 04:11:36.782843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.783837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.783929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist superblock 00:15:35.157 [2024-11-26 04:11:36.783980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:15:35.157 [2024-11-26 04:11:36.784001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.785189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.157 [2024-11-26 04:11:36.785283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:35.157 [2024-11-26 04:11:36.785333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:15:35.157 [2024-11-26 04:11:36.785355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.157 [2024-11-26 04:11:36.785445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:35.157 [2024-11-26 04:11:36.785475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.785985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 
04:11:36.786583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:35.157 [2024-11-26 04:11:36.786683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:15:35.158 [2024-11-26 04:11:36.786786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.786993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:35.158 [2024-11-26 04:11:36.787247] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:35.158 [2024-11-26 04:11:36.787255] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67d4fab1-ad45-4e8d-83b0-1fc3f149cedb 00:15:35.158 [2024-11-26 04:11:36.787263] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:35.158 [2024-11-26 04:11:36.787271] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:35.158 [2024-11-26 04:11:36.787277] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:35.158 [2024-11-26 04:11:36.787287] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:35.158 [2024-11-26 04:11:36.787294] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:35.158 [2024-11-26 04:11:36.787309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:35.158 [2024-11-26 04:11:36.787316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:35.158 [2024-11-26 04:11:36.787324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:35.158 [2024-11-26 04:11:36.787329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:35.158 [2024-11-26 04:11:36.787338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.158 [2024-11-26 04:11:36.787345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:35.158 [2024-11-26 04:11:36.787354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.895 ms 00:15:35.158 [2024-11-26 04:11:36.787363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.158 [2024-11-26 04:11:36.790491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.158 [2024-11-26 04:11:36.790815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:35.158 [2024-11-26 04:11:36.790994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:15:35.158 [2024-11-26 04:11:36.791145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.158 [2024-11-26 04:11:36.791411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:35.158 [2024-11-26 04:11:36.791451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:35.158 [2024-11-26 04:11:36.791488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:35.158 [2024-11-26 04:11:36.791540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.158 [2024-11-26 
04:11:36.801842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.158 [2024-11-26 04:11:36.801884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:35.158 [2024-11-26 04:11:36.801896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.158 [2024-11-26 04:11:36.801903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.801961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.801969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:35.159 [2024-11-26 04:11:36.801980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.801987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.802062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.802072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:35.159 [2024-11-26 04:11:36.802082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.802089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.802108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.802115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:35.159 [2024-11-26 04:11:36.802125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.802133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.810981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.811026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:35.159 [2024-11-26 04:11:36.811038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.811046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.814537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.814574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:35.159 [2024-11-26 04:11:36.814587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.814595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.814644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.814653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:35.159 [2024-11-26 04:11:36.814662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.814669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.814714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.814722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:35.159 [2024-11-26 04:11:36.814732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.814739] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.814810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.814819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:35.159 [2024-11-26 04:11:36.814828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.814835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.814864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.814872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:35.159 [2024-11-26 04:11:36.814881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.814888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.814927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.814935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:35.159 [2024-11-26 04:11:36.814944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.814951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.814995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:35.159 [2024-11-26 04:11:36.815004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:35.159 [2024-11-26 04:11:36.815013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:35.159 [2024-11-26 04:11:36.815020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:35.159 [2024-11-26 04:11:36.815154] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.385 ms, result 0 00:15:35.159 true 00:15:35.159 04:11:36 -- ftl/restore.sh@66 -- # killprocess 83654 00:15:35.159 04:11:36 -- common/autotest_common.sh@936 -- # '[' -z 83654 ']' 00:15:35.159 04:11:36 -- common/autotest_common.sh@940 -- # kill -0 83654 00:15:35.159 04:11:36 -- common/autotest_common.sh@941 -- # uname 00:15:35.159 04:11:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.159 04:11:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83654 00:15:35.159 killing process with pid 83654 00:15:35.159 04:11:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:35.159 04:11:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:35.159 04:11:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83654' 00:15:35.159 04:11:36 -- common/autotest_common.sh@955 -- # kill 83654 00:15:35.159 04:11:36 -- common/autotest_common.sh@960 -- # wait 83654 00:15:40.424 04:11:41 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:15:44.612 262144+0 records in 00:15:44.612 262144+0 records out 00:15:44.612 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.15079 s, 259 MB/s 00:15:44.613 04:11:45 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:15:45.989 04:11:47 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
00:15:45.989 [2024-11-26 04:11:47.608867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:45.989 [2024-11-26 04:11:47.608961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83898 ] 00:15:45.989 [2024-11-26 04:11:47.749954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.249 [2024-11-26 04:11:47.780275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.249 [2024-11-26 04:11:47.864086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:46.249 [2024-11-26 04:11:47.864151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:46.249 [2024-11-26 04:11:48.009834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.249 [2024-11-26 04:11:48.009891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:46.249 [2024-11-26 04:11:48.009908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:46.249 [2024-11-26 04:11:48.009916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.249 [2024-11-26 04:11:48.009966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.249 [2024-11-26 04:11:48.009976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:46.249 [2024-11-26 04:11:48.009985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:15:46.249 [2024-11-26 04:11:48.009994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.249 [2024-11-26 04:11:48.010013] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:46.249 [2024-11-26 04:11:48.010244] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:46.249 [2024-11-26 04:11:48.010257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.249 [2024-11-26 04:11:48.010267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:46.249 [2024-11-26 04:11:48.010279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:15:46.249 [2024-11-26 04:11:48.010286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.508 [2024-11-26 04:11:48.011384] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:46.508 [2024-11-26 04:11:48.013678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.508 [2024-11-26 04:11:48.013823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:46.508 [2024-11-26 04:11:48.013842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:15:46.508 [2024-11-26 04:11:48.013854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.508 [2024-11-26 04:11:48.013899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.509 [2024-11-26 04:11:48.013909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:46.509 [2024-11-26 04:11:48.013917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:46.509 [2024-11-26 04:11:48.013924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.509 
[2024-11-26 04:11:48.018575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.509 [2024-11-26 04:11:48.018605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:46.509 [2024-11-26 04:11:48.018614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.590 ms 00:15:46.509 [2024-11-26 04:11:48.018621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.509 [2024-11-26 04:11:48.018684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.509 [2024-11-26 04:11:48.018696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:46.509 [2024-11-26 04:11:48.018704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:15:46.509 [2024-11-26 04:11:48.018711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.509 [2024-11-26 04:11:48.018755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.509 [2024-11-26 04:11:48.018768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:46.509 [2024-11-26 04:11:48.018778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:46.509 [2024-11-26 04:11:48.018784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.509 [2024-11-26 04:11:48.018807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:46.509 [2024-11-26 04:11:48.020071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.509 [2024-11-26 04:11:48.020101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:46.509 [2024-11-26 04:11:48.020110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:15:46.509 [2024-11-26 04:11:48.020117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.509 [2024-11-26 04:11:48.020146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.509 [2024-11-26 04:11:48.020154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:46.509 [2024-11-26 04:11:48.020164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:46.509 [2024-11-26 04:11:48.020170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.509 [2024-11-26 04:11:48.020189] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:46.509 [2024-11-26 04:11:48.020206] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:46.509 [2024-11-26 04:11:48.020241] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:46.509 [2024-11-26 04:11:48.020256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:46.509 [2024-11-26 04:11:48.020327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:46.509 [2024-11-26 04:11:48.020339] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:46.509 [2024-11-26 04:11:48.020352] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:46.509 [2024-11-26 04:11:48.020361] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base 
device capacity: 103424.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020370] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020379] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:46.509 [2024-11-26 04:11:48.020386] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:46.509 [2024-11-26 04:11:48.020393] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:46.509 [2024-11-26 04:11:48.020400] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:46.509 [2024-11-26 04:11:48.020407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.509 [2024-11-26 04:11:48.020414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:46.509 [2024-11-26 04:11:48.020422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:15:46.509 [2024-11-26 04:11:48.020430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.509 [2024-11-26 04:11:48.020489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.509 [2024-11-26 04:11:48.020496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:46.509 [2024-11-26 04:11:48.020521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:46.509 [2024-11-26 04:11:48.020528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.509 [2024-11-26 04:11:48.020603] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:46.509 [2024-11-26 04:11:48.020615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:46.509 [2024-11-26 04:11:48.020623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020644] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:46.509 [2024-11-26 04:11:48.020650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:46.509 [2024-11-26 04:11:48.020669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:46.509 [2024-11-26 04:11:48.020682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:46.509 [2024-11-26 04:11:48.020690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:46.509 [2024-11-26 04:11:48.020697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:46.509 [2024-11-26 04:11:48.020708] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:46.509 [2024-11-26 04:11:48.020715] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:46.509 [2024-11-26 04:11:48.020721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:46.509 [2024-11-26 04:11:48.020736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 97.75 MiB 00:15:46.509 [2024-11-26 04:11:48.020743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:46.509 [2024-11-26 04:11:48.020759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:46.509 [2024-11-26 04:11:48.020767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020775] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:46.509 [2024-11-26 04:11:48.020782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020796] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:46.509 [2024-11-26 04:11:48.020803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:46.509 [2024-11-26 04:11:48.020825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020839] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:46.509 [2024-11-26 04:11:48.020847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:46.509 [2024-11-26 04:11:48.020861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:46.509 [2024-11-26 04:11:48.020872] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:46.509 [2024-11-26 04:11:48.020880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:46.509 [2024-11-26 04:11:48.020887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:46.509 [2024-11-26 04:11:48.020894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:46.510 [2024-11-26 04:11:48.020901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:46.510 [2024-11-26 04:11:48.020908] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:46.510 [2024-11-26 04:11:48.020916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:46.510 [2024-11-26 04:11:48.020924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:46.510 [2024-11-26 04:11:48.020936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.510 [2024-11-26 04:11:48.020945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:46.510 [2024-11-26 04:11:48.020952] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:46.510 [2024-11-26 04:11:48.020959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:46.510 [2024-11-26 04:11:48.020967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:46.510 [2024-11-26 04:11:48.020974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:46.510 [2024-11-26 04:11:48.020982] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:46.510 [2024-11-26 04:11:48.020991] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:46.510 [2024-11-26 04:11:48.021002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:46.510 [2024-11-26 04:11:48.021011] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:46.510 [2024-11-26 04:11:48.021020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:46.510 [2024-11-26 04:11:48.021027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:46.510 [2024-11-26 04:11:48.021035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:46.510 [2024-11-26 04:11:48.021043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:46.510 [2024-11-26 04:11:48.021051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:46.510 [2024-11-26 04:11:48.021059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:46.510 [2024-11-26 04:11:48.021067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:46.510 [2024-11-26 04:11:48.021075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:46.510 [2024-11-26 04:11:48.021083] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:46.510 [2024-11-26 04:11:48.021099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:46.510 [2024-11-26 04:11:48.021107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:46.510 [2024-11-26 04:11:48.021116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:46.510 [2024-11-26 04:11:48.021123] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:46.510 [2024-11-26 04:11:48.021131] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:46.510 [2024-11-26 04:11:48.021140] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:46.510 [2024-11-26 04:11:48.021147] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:46.510 [2024-11-26 04:11:48.021154] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:46.510 [2024-11-26 04:11:48.021161] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:46.510 [2024-11-26 04:11:48.021168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.021175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:46.510 [2024-11-26 04:11:48.021182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:15:46.510 [2024-11-26 04:11:48.021191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.026971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.026999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:46.510 [2024-11-26 04:11:48.027008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.743 ms 00:15:46.510 [2024-11-26 04:11:48.027016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.027097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.027105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:46.510 [2024-11-26 04:11:48.027112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:46.510 [2024-11-26 04:11:48.027120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.045079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.045154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:46.510 [2024-11-26 04:11:48.045175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.915 ms 00:15:46.510 [2024-11-26 04:11:48.045189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.045254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.045270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:46.510 [2024-11-26 04:11:48.045285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:46.510 [2024-11-26 04:11:48.045307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.045788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.045823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:46.510 [2024-11-26 04:11:48.045841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:15:46.510 [2024-11-26 04:11:48.045855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.046062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.046088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:46.510 [2024-11-26 04:11:48.046104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:15:46.510 [2024-11-26 04:11:48.046118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.052870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.053081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:46.510 [2024-11-26 
04:11:48.053138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.715 ms 00:15:46.510 [2024-11-26 04:11:48.053152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.055543] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:15:46.510 [2024-11-26 04:11:48.055569] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:46.510 [2024-11-26 04:11:48.055582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.055589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:46.510 [2024-11-26 04:11:48.055598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:15:46.510 [2024-11-26 04:11:48.055605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.070257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.070307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:46.510 [2024-11-26 04:11:48.070325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.362 ms 00:15:46.510 [2024-11-26 04:11:48.070333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.510 [2024-11-26 04:11:48.072064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.510 [2024-11-26 04:11:48.072185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:46.511 [2024-11-26 04:11:48.072200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms 00:15:46.511 [2024-11-26 04:11:48.072207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.073486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.073521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:46.511 [2024-11-26 04:11:48.073530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:15:46.511 [2024-11-26 04:11:48.073541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.073729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.073739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:46.511 [2024-11-26 04:11:48.073747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:15:46.511 [2024-11-26 04:11:48.073754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.090947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.091130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:46.511 [2024-11-26 04:11:48.091147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.172 ms 00:15:46.511 [2024-11-26 04:11:48.091155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.098486] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:15:46.511 [2024-11-26 04:11:48.101015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.101046] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:46.511 [2024-11-26 04:11:48.101061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.827 ms 00:15:46.511 [2024-11-26 04:11:48.101070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.101154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.101165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:46.511 [2024-11-26 04:11:48.101175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:46.511 [2024-11-26 04:11:48.101183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.101236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.101249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:46.511 [2024-11-26 04:11:48.101259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:46.511 [2024-11-26 04:11:48.101266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.102480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.102603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:46.511 [2024-11-26 04:11:48.102623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:15:46.511 [2024-11-26 04:11:48.102631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.102663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.102670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:46.511 [2024-11-26 04:11:48.102678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:46.511 [2024-11-26 04:11:48.102688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.102721] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:46.511 [2024-11-26 04:11:48.102730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.102737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:46.511 [2024-11-26 04:11:48.102744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:46.511 [2024-11-26 04:11:48.102754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.106076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.106183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:46.511 [2024-11-26 04:11:48.106197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.302 ms 00:15:46.511 [2024-11-26 04:11:48.106205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.106268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.511 [2024-11-26 04:11:48.106277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:46.511 [2024-11-26 04:11:48.106289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:46.511 [2024-11-26 04:11:48.106302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:46.511 [2024-11-26 04:11:48.107237] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.024 ms, result 0 00:15:47.447  [2024-11-26T04:11:50.148Z] Copying: 44/1024 [MB] (44 MBps) [2024-11-26T04:11:51.521Z] Copying: 89/1024 [MB] (45 MBps) [2024-11-26T04:11:52.452Z] Copying: 135/1024 [MB] (45 MBps) [2024-11-26T04:11:53.387Z] Copying: 177/1024 [MB] (42 MBps) [2024-11-26T04:11:54.320Z] Copying: 222/1024 [MB] (44 MBps) [2024-11-26T04:11:55.254Z] Copying: 266/1024 [MB] (44 MBps) [2024-11-26T04:11:56.190Z] Copying: 315/1024 [MB] (48 MBps) [2024-11-26T04:11:57.125Z] Copying: 362/1024 [MB] (46 MBps) [2024-11-26T04:11:58.501Z] Copying: 407/1024 [MB] (45 MBps) [2024-11-26T04:11:59.434Z] Copying: 452/1024 [MB] (44 MBps) [2024-11-26T04:12:00.369Z] Copying: 498/1024 [MB] (45 MBps) [2024-11-26T04:12:01.302Z] Copying: 543/1024 [MB] (44 MBps) [2024-11-26T04:12:02.237Z] Copying: 588/1024 [MB] (45 MBps) [2024-11-26T04:12:03.170Z] Copying: 634/1024 [MB] (45 MBps) [2024-11-26T04:12:04.544Z] Copying: 679/1024 [MB] (45 MBps) [2024-11-26T04:12:05.478Z] Copying: 724/1024 [MB] (45 MBps) [2024-11-26T04:12:06.442Z] Copying: 769/1024 [MB] (45 MBps) [2024-11-26T04:12:07.376Z] Copying: 815/1024 [MB] (45 MBps) [2024-11-26T04:12:08.305Z] Copying: 863/1024 [MB] (47 MBps) [2024-11-26T04:12:09.238Z] Copying: 909/1024 [MB] (45 MBps) [2024-11-26T04:12:10.281Z] Copying: 955/1024 [MB] (45 MBps) [2024-11-26T04:12:10.849Z] Copying: 1001/1024 [MB] (45 MBps) [2024-11-26T04:12:10.849Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-11-26 04:12:10.603053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.603101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:09.081 [2024-11-26 04:12:10.603114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:09.081 [2024-11-26 04:12:10.603123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.603142] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:09.081 [2024-11-26 04:12:10.603595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.603618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:09.081 [2024-11-26 04:12:10.603630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:16:09.081 [2024-11-26 04:12:10.603641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.604997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.605121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:09.081 [2024-11-26 04:12:10.605137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:16:09.081 [2024-11-26 04:12:10.605146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.617048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.617077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:09.081 [2024-11-26 04:12:10.617096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.886 ms 00:16:09.081 [2024-11-26 04:12:10.617104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 
[2024-11-26 04:12:10.623175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.623200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:09.081 [2024-11-26 04:12:10.623210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.044 ms 00:16:09.081 [2024-11-26 04:12:10.623225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.624521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.624548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:09.081 [2024-11-26 04:12:10.624556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:16:09.081 [2024-11-26 04:12:10.624563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.627942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.627973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:09.081 [2024-11-26 04:12:10.627986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.352 ms 00:16:09.081 [2024-11-26 04:12:10.627993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.628086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.628094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:09.081 [2024-11-26 04:12:10.628102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:16:09.081 [2024-11-26 04:12:10.628109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.629716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.629742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:09.081 [2024-11-26 04:12:10.629750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:16:09.081 [2024-11-26 04:12:10.629757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.630861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.630971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:09.081 [2024-11-26 04:12:10.630985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:16:09.081 [2024-11-26 04:12:10.630992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.631941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.631974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:09.081 [2024-11-26 04:12:10.631982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:16:09.081 [2024-11-26 04:12:10.631989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.632950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.081 [2024-11-26 04:12:10.632977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:09.081 [2024-11-26 04:12:10.632985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:16:09.081 [2024-11-26 04:12:10.632992] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.081 [2024-11-26 04:12:10.633016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:09.081 [2024-11-26 04:12:10.633035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:09.081 [2024-11-26 04:12:10.633119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633405] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 
04:12:10.633619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:16:09.082 [2024-11-26 04:12:10.633799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:09.082 [2024-11-26 04:12:10.633821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:09.083 [2024-11-26 04:12:10.633832] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67d4fab1-ad45-4e8d-83b0-1fc3f149cedb 00:16:09.083 [2024-11-26 04:12:10.633839] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:09.083 [2024-11-26 04:12:10.633846] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:09.083 [2024-11-26 04:12:10.633856] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:09.083 [2024-11-26 04:12:10.633863] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:09.083 [2024-11-26 04:12:10.633870] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:09.083 [2024-11-26 04:12:10.633877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:09.083 [2024-11-26 04:12:10.633884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:09.083 [2024-11-26 04:12:10.633889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:09.083 [2024-11-26 04:12:10.633896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:09.083 [2024-11-26 04:12:10.633903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.083 [2024-11-26 04:12:10.633913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:09.083 [2024-11-26 04:12:10.633921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:16:09.083 [2024-11-26 04:12:10.633930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.635228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.083 [2024-11-26 04:12:10.635247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:09.083 [2024-11-26 04:12:10.635256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.279 ms 00:16:09.083 [2024-11-26 04:12:10.635262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.635331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.083 [2024-11-26 04:12:10.635344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:09.083 [2024-11-26 04:12:10.635352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:09.083 [2024-11-26 04:12:10.635360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.640085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.640116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:09.083 [2024-11-26 04:12:10.640125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.640132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.640179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.640187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:16:09.083 [2024-11-26 04:12:10.640194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.640204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.640253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.640263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:09.083 [2024-11-26 04:12:10.640270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.640277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.640291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.640298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:09.083 [2024-11-26 04:12:10.640305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.640312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.648312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.648352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:09.083 [2024-11-26 04:12:10.648361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.648369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.651973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.652002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:09.083 [2024-11-26 04:12:10.652017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.652027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.652075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.652084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:09.083 [2024-11-26 04:12:10.652091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.652099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.652122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.652133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:09.083 [2024-11-26 04:12:10.652141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.652147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.652206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.652215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:09.083 [2024-11-26 04:12:10.652223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.652229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.652254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:16:09.083 [2024-11-26 04:12:10.652263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:09.083 [2024-11-26 04:12:10.652270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.652277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.652311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.652321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:09.083 [2024-11-26 04:12:10.652329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.652335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.652372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:09.083 [2024-11-26 04:12:10.652380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:09.083 [2024-11-26 04:12:10.652387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:09.083 [2024-11-26 04:12:10.652394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.083 [2024-11-26 04:12:10.652528] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.426 ms, result 0 00:16:09.342 00:16:09.342 00:16:09.342 04:12:10 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:16:09.342 [2024-11-26 04:12:11.014732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:09.342 [2024-11-26 04:12:11.014990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84148 ] 00:16:09.600 [2024-11-26 04:12:11.162031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.600 [2024-11-26 04:12:11.191933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.600 [2024-11-26 04:12:11.274396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:09.600 [2024-11-26 04:12:11.274466] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:09.860 [2024-11-26 04:12:11.419329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.419379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:09.860 [2024-11-26 04:12:11.419392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:09.860 [2024-11-26 04:12:11.419400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.419453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.419463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:09.860 [2024-11-26 04:12:11.419471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:09.860 [2024-11-26 04:12:11.419480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.419499] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:09.860 [2024-11-26 04:12:11.419748] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:09.860 [2024-11-26 04:12:11.419764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.419774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:09.860 [2024-11-26 04:12:11.419782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:16:09.860 [2024-11-26 04:12:11.419789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.421025] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:09.860 [2024-11-26 04:12:11.423268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.423308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:09.860 [2024-11-26 04:12:11.423321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:16:09.860 [2024-11-26 04:12:11.423329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.423376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.423386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:09.860 [2024-11-26 04:12:11.423398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:09.860 [2024-11-26 04:12:11.423407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.427953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 
04:12:11.427987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:09.860 [2024-11-26 04:12:11.427996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.487 ms 00:16:09.860 [2024-11-26 04:12:11.428006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.428073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.428082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:09.860 [2024-11-26 04:12:11.428090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:09.860 [2024-11-26 04:12:11.428097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.428139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.428149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:09.860 [2024-11-26 04:12:11.428160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:09.860 [2024-11-26 04:12:11.428167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.428197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:09.860 [2024-11-26 04:12:11.429479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.429521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:09.860 [2024-11-26 04:12:11.429534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:16:09.860 [2024-11-26 04:12:11.429541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.429572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.860 [2024-11-26 04:12:11.429580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:09.860 [2024-11-26 04:12:11.429589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:09.860 [2024-11-26 04:12:11.429599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.860 [2024-11-26 04:12:11.429617] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:09.860 [2024-11-26 04:12:11.429634] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:09.860 [2024-11-26 04:12:11.429666] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:09.860 [2024-11-26 04:12:11.429680] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:09.860 [2024-11-26 04:12:11.429753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:09.860 [2024-11-26 04:12:11.429765] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:09.860 [2024-11-26 04:12:11.429777] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:09.860 [2024-11-26 04:12:11.429786] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:09.860 [2024-11-26 04:12:11.429797] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:09.861 [2024-11-26 04:12:11.429807] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:09.861 [2024-11-26 04:12:11.429816] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:09.861 [2024-11-26 04:12:11.429829] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:09.861 [2024-11-26 04:12:11.429836] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:09.861 [2024-11-26 04:12:11.429846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.861 [2024-11-26 04:12:11.429853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:09.861 [2024-11-26 04:12:11.429861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:16:09.861 [2024-11-26 04:12:11.429869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.861 [2024-11-26 04:12:11.429927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.861 [2024-11-26 04:12:11.429935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:09.861 [2024-11-26 04:12:11.429942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:09.861 [2024-11-26 04:12:11.429949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.861 [2024-11-26 04:12:11.430016] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:09.861 [2024-11-26 04:12:11.430025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:09.861 [2024-11-26 04:12:11.430032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:09.861 [2024-11-26 04:12:11.430057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:09.861 [2024-11-26 04:12:11.430078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:09.861 [2024-11-26 04:12:11.430091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:09.861 [2024-11-26 04:12:11.430097] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:09.861 [2024-11-26 04:12:11.430103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:09.861 [2024-11-26 04:12:11.430114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:09.861 [2024-11-26 04:12:11.430121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:09.861 [2024-11-26 04:12:11.430128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:09.861 [2024-11-26 04:12:11.430142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:09.861 [2024-11-26 04:12:11.430150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:09.861 [2024-11-26 04:12:11.430167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:09.861 [2024-11-26 04:12:11.430174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430182] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:09.861 [2024-11-26 04:12:11.430189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:09.861 [2024-11-26 04:12:11.430210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:09.861 [2024-11-26 04:12:11.430232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:09.861 [2024-11-26 04:12:11.430253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:09.861 [2024-11-26 04:12:11.430279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:09.861 [2024-11-26 04:12:11.430294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:09.861 [2024-11-26 04:12:11.430301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:09.861 [2024-11-26 04:12:11.430308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:09.861 [2024-11-26 04:12:11.430315] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:09.861 [2024-11-26 04:12:11.430323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:09.861 [2024-11-26 04:12:11.430330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.861 [2024-11-26 04:12:11.430347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:09.861 [2024-11-26 04:12:11.430355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:09.861 [2024-11-26 04:12:11.430363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:09.861 [2024-11-26 04:12:11.430371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:09.861 [2024-11-26 04:12:11.430378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:09.861 [2024-11-26 04:12:11.430385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:09.861 [2024-11-26 04:12:11.430393] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:09.861 [2024-11-26 04:12:11.430405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:09.861 [2024-11-26 04:12:11.430414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:09.861 [2024-11-26 04:12:11.430422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:09.861 [2024-11-26 04:12:11.430430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:09.861 [2024-11-26 04:12:11.430438] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:09.861 [2024-11-26 04:12:11.430446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:09.861 [2024-11-26 04:12:11.430454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:09.861 [2024-11-26 04:12:11.430462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:09.861 [2024-11-26 04:12:11.430469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:09.861 [2024-11-26 04:12:11.430477] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:09.861 [2024-11-26 04:12:11.430485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:09.861 [2024-11-26 04:12:11.430493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:09.861 [2024-11-26 04:12:11.430770] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:09.861 [2024-11-26 04:12:11.430818] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:09.861 [2024-11-26 04:12:11.430846] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:09.861 [2024-11-26 04:12:11.430921] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:09.861 [2024-11-26 04:12:11.430957] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:09.861 [2024-11-26 04:12:11.430986] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:09.861 [2024-11-26 04:12:11.431036] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:09.861 [2024-11-26 04:12:11.431142] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:16:09.861 [2024-11-26 04:12:11.431205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.861 [2024-11-26 04:12:11.431231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:09.861 [2024-11-26 04:12:11.431250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:16:09.861 [2024-11-26 04:12:11.431272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.861 [2024-11-26 04:12:11.436977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.861 [2024-11-26 04:12:11.437077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:09.861 [2024-11-26 04:12:11.437142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.621 ms 00:16:09.861 [2024-11-26 04:12:11.437194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.861 [2024-11-26 04:12:11.437289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.861 [2024-11-26 04:12:11.437380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:09.861 [2024-11-26 04:12:11.437430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:09.861 [2024-11-26 04:12:11.437457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.861 [2024-11-26 04:12:11.452622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.861 [2024-11-26 04:12:11.452745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:09.861 [2024-11-26 04:12:11.452802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.098 ms 00:16:09.861 [2024-11-26 04:12:11.452813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.861 [2024-11-26 04:12:11.452857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.861 [2024-11-26 04:12:11.452867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:09.861 [2024-11-26 04:12:11.452875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:09.861 [2024-11-26 04:12:11.452887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.453220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.453235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:09.862 [2024-11-26 04:12:11.453248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:16:09.862 [2024-11-26 04:12:11.453256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.453385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.453396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:09.862 [2024-11-26 04:12:11.453406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:09.862 [2024-11-26 04:12:11.453415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.459096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.459222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:09.862 [2024-11-26 04:12:11.459239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.653 ms 00:16:09.862 [2024-11-26 
04:12:11.459256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.461739] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:09.862 [2024-11-26 04:12:11.461778] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:09.862 [2024-11-26 04:12:11.461795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.461804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:09.862 [2024-11-26 04:12:11.461814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.447 ms 00:16:09.862 [2024-11-26 04:12:11.461822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.476723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.476759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:09.862 [2024-11-26 04:12:11.476769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.858 ms 00:16:09.862 [2024-11-26 04:12:11.476776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.478420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.478537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:09.862 [2024-11-26 04:12:11.478551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:16:09.862 [2024-11-26 04:12:11.478559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.479855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.479885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:09.862 [2024-11-26 04:12:11.479893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:16:09.862 [2024-11-26 04:12:11.479903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.480087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.480098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:09.862 [2024-11-26 04:12:11.480106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:16:09.862 [2024-11-26 04:12:11.480115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.496938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.496988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:09.862 [2024-11-26 04:12:11.497000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.803 ms 00:16:09.862 [2024-11-26 04:12:11.497008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.504265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:09.862 [2024-11-26 04:12:11.506521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.506547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:09.862 [2024-11-26 04:12:11.506557] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.469 ms 00:16:09.862 [2024-11-26 04:12:11.506565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.506628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.506638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:09.862 [2024-11-26 04:12:11.506647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:09.862 [2024-11-26 04:12:11.506655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.506705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.506720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:09.862 [2024-11-26 04:12:11.506729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:09.862 [2024-11-26 04:12:11.506738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.507938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.507969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:09.862 [2024-11-26 04:12:11.507978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:16:09.862 [2024-11-26 04:12:11.507985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.508010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.508018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:09.862 [2024-11-26 04:12:11.508030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:09.862 [2024-11-26 04:12:11.508037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.508080] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:09.862 [2024-11-26 04:12:11.508094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.508101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:09.862 [2024-11-26 04:12:11.508108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:09.862 [2024-11-26 04:12:11.508117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.511415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.511446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:09.862 [2024-11-26 04:12:11.511456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 00:16:09.862 [2024-11-26 04:12:11.511468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.511545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.862 [2024-11-26 04:12:11.511554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:09.862 [2024-11-26 04:12:11.511565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:09.862 [2024-11-26 04:12:11.511572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.862 [2024-11-26 04:12:11.512736] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 93.003 ms, result 0 00:16:11.237  [2024-11-26T04:12:13.939Z] Copying: 48/1024 [MB] (48 MBps) [2024-11-26T04:12:14.873Z] Copying: 97/1024 [MB] (48 MBps) [2024-11-26T04:12:15.806Z] Copying: 146/1024 [MB] (49 MBps) [2024-11-26T04:12:16.741Z] Copying: 195/1024 [MB] (48 MBps) [2024-11-26T04:12:18.115Z] Copying: 241/1024 [MB] (46 MBps) [2024-11-26T04:12:19.048Z] Copying: 290/1024 [MB] (48 MBps) [2024-11-26T04:12:19.982Z] Copying: 337/1024 [MB] (47 MBps) [2024-11-26T04:12:20.918Z] Copying: 388/1024 [MB] (50 MBps) [2024-11-26T04:12:21.853Z] Copying: 435/1024 [MB] (47 MBps) [2024-11-26T04:12:22.789Z] Copying: 482/1024 [MB] (46 MBps) [2024-11-26T04:12:23.723Z] Copying: 530/1024 [MB] (47 MBps) [2024-11-26T04:12:25.108Z] Copying: 579/1024 [MB] (49 MBps) [2024-11-26T04:12:26.043Z] Copying: 627/1024 [MB] (48 MBps) [2024-11-26T04:12:26.977Z] Copying: 679/1024 [MB] (51 MBps) [2024-11-26T04:12:27.912Z] Copying: 727/1024 [MB] (48 MBps) [2024-11-26T04:12:28.846Z] Copying: 777/1024 [MB] (49 MBps) [2024-11-26T04:12:29.788Z] Copying: 825/1024 [MB] (48 MBps) [2024-11-26T04:12:30.722Z] Copying: 874/1024 [MB] (49 MBps) [2024-11-26T04:12:32.095Z] Copying: 921/1024 [MB] (47 MBps) [2024-11-26T04:12:33.032Z] Copying: 971/1024 [MB] (49 MBps) [2024-11-26T04:12:33.032Z] Copying: 1022/1024 [MB] (51 MBps) [2024-11-26T04:12:33.032Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-11-26 04:12:32.811537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.811591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:31.264 [2024-11-26 04:12:32.811605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.264 [2024-11-26 04:12:32.811613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.811634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.264 [2024-11-26 04:12:32.812070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.812090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:31.264 [2024-11-26 04:12:32.812098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:16:31.264 [2024-11-26 04:12:32.812105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.812326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.812334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:31.264 [2024-11-26 04:12:32.812343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:16:31.264 [2024-11-26 04:12:32.812351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.815793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.815810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:31.264 [2024-11-26 04:12:32.815820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:16:31.264 [2024-11-26 04:12:32.815828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.821909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.822042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Finish L2P unmaps 00:16:31.264 [2024-11-26 04:12:32.822058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.066 ms 00:16:31.264 [2024-11-26 04:12:32.822065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.825937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.825963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:31.264 [2024-11-26 04:12:32.825971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.789 ms 00:16:31.264 [2024-11-26 04:12:32.825978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.829406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.829437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:31.264 [2024-11-26 04:12:32.829445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:16:31.264 [2024-11-26 04:12:32.829452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.829579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.829589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:31.264 [2024-11-26 04:12:32.829598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:16:31.264 [2024-11-26 04:12:32.829611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.831441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.831460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:31.264 [2024-11-26 04:12:32.831468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.815 ms 00:16:31.264 [2024-11-26 04:12:32.831475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.832989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.833104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:31.264 [2024-11-26 04:12:32.833168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:16:31.264 [2024-11-26 04:12:32.833194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.834094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.834187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:31.264 [2024-11-26 04:12:32.834241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:16:31.264 [2024-11-26 04:12:32.834262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.835193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.264 [2024-11-26 04:12:32.835279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:31.264 [2024-11-26 04:12:32.835326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:16:31.264 [2024-11-26 04:12:32.835346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.264 [2024-11-26 04:12:32.835373] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:31.264 
[2024-11-26 04:12:32.835463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.835982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:16:31.264 [2024-11-26 04:12:32.836472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:31.264 [2024-11-26 04:12:32.836892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.836920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.836986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837984] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:31.265 [2024-11-26 04:12:32.837999] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:31.265 [2024-11-26 04:12:32.838007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67d4fab1-ad45-4e8d-83b0-1fc3f149cedb 00:16:31.265 [2024-11-26 04:12:32.838015] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:31.265 [2024-11-26 04:12:32.838021] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:31.265 [2024-11-26 04:12:32.838028] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:31.265 [2024-11-26 04:12:32.838036] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:31.265 [2024-11-26 04:12:32.838043] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:31.265 [2024-11-26 04:12:32.838050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:31.265 [2024-11-26 04:12:32.838057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:31.265 [2024-11-26 04:12:32.838066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:31.265 [2024-11-26 04:12:32.838072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:31.265 [2024-11-26 04:12:32.838081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.265 [2024-11-26 04:12:32.838094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:31.265 [2024-11-26 04:12:32.838102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:16:31.265 [2024-11-26 04:12:32.838115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.265 [2024-11-26 04:12:32.839441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.265 [2024-11-26 04:12:32.839456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:31.265 [2024-11-26 04:12:32.839465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:16:31.265 [2024-11-26 04:12:32.839472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.265 [2024-11-26 04:12:32.839541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.265 [2024-11-26 04:12:32.839549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:31.265 [2024-11-26 04:12:32.839560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:31.265 [2024-11-26 04:12:32.839566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.265 [2024-11-26 04:12:32.846526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.265 [2024-11-26 04:12:32.846624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.265 [2024-11-26 04:12:32.846673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.265 [2024-11-26 04:12:32.846702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.265 [2024-11-26 04:12:32.846846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.265 [2024-11-26 04:12:32.846894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.265 [2024-11-26 04:12:32.846947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.846968] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.847049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.847075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.266 [2024-11-26 04:12:32.847094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.847149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.847179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.847205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.266 [2024-11-26 04:12:32.847223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.847276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.855166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.855302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.266 [2024-11-26 04:12:32.855351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.855372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.858885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.858991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.266 [2024-11-26 04:12:32.859046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.859067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.859113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.859182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:31.266 [2024-11-26 04:12:32.859204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.859222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.859275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.859329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:31.266 [2024-11-26 04:12:32.859351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.859369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.859447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.859491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:31.266 [2024-11-26 04:12:32.859531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.859549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.859621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.859645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:31.266 [2024-11-26 04:12:32.859699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.859721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.859779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.859830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:31.266 [2024-11-26 04:12:32.859928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.859957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.860039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:31.266 [2024-11-26 04:12:32.860089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:31.266 [2024-11-26 04:12:32.860131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:31.266 [2024-11-26 04:12:32.860153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.266 [2024-11-26 04:12:32.860309] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.774 ms, result 0 00:16:31.524 00:16:31.524 00:16:31.524 04:12:33 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:34.056 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:16:34.056 04:12:35 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:16:34.056 [2024-11-26 04:12:35.337019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:34.056 [2024-11-26 04:12:35.337327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84403 ] 00:16:34.056 [2024-11-26 04:12:35.482986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.056 [2024-11-26 04:12:35.513829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.056 [2024-11-26 04:12:35.597238] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.056 [2024-11-26 04:12:35.597485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.056 [2024-11-26 04:12:35.742435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.056 [2024-11-26 04:12:35.742485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:34.056 [2024-11-26 04:12:35.742498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:34.056 [2024-11-26 04:12:35.742525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.056 [2024-11-26 04:12:35.742577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.056 [2024-11-26 04:12:35.742587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:34.056 [2024-11-26 04:12:35.742596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:34.056 [2024-11-26 04:12:35.742605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.056 [2024-11-26 04:12:35.742624] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
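A note on the test flow at this point: restore.sh has just md5-verified the range written earlier (the "testfile: OK" above) and is bringing the FTL device back up so spdk_dd can write the next range at offset 131072. Each spdk_dd invocation wraps the transfer in a full device bring-up ('FTL startup') and tear-down ('FTL shutdown'), which is why the management trace repeats around every copy. Below is a minimal sketch of the write/verify round-trip being exercised here, using only the flags and paths visible in this log (the read-back command appears later in the transcript); the shell variable names are illustrative, and --seek/--skip/--count are dd-style counts whose byte size depends on the configured I/O unit, so read the offsets as unit counts, not bytes:

  SPDK=/home/vagrant/spdk_repo/spdk          # repo root used by restore.sh (illustrative variable)
  DD=$SPDK/build/bin/spdk_dd
  CFG=$SPDK/test/ftl/config/ftl.json         # bdev config that defines ftl0

  # write the test pattern into ftl0 at a dd-style offset
  $DD --if=$SPDK/test/ftl/testfile --ob=ftl0 --json=$CFG --seek=131072

  # later, read the same range back out and verify the recorded checksum
  $DD --ib=ftl0 --of=$SPDK/test/ftl/testfile --json=$CFG --skip=131072 --count=262144
  md5sum -c $SPDK/test/ftl/testfile.md5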
00:16:34.056 [2024-11-26 04:12:35.742875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:34.056 [2024-11-26 04:12:35.742890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.056 [2024-11-26 04:12:35.742899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:34.056 [2024-11-26 04:12:35.742908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:16:34.056 [2024-11-26 04:12:35.742914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.056 [2024-11-26 04:12:35.743945] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:34.056 [2024-11-26 04:12:35.746057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.056 [2024-11-26 04:12:35.746198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:34.056 [2024-11-26 04:12:35.746218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:16:34.056 [2024-11-26 04:12:35.746226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.056 [2024-11-26 04:12:35.746272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.056 [2024-11-26 04:12:35.746287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:34.056 [2024-11-26 04:12:35.746295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:34.056 [2024-11-26 04:12:35.746302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.056 [2024-11-26 04:12:35.750905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.057 [2024-11-26 04:12:35.750939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:34.057 [2024-11-26 04:12:35.750949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.543 ms 00:16:34.057 [2024-11-26 04:12:35.750959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.057 [2024-11-26 04:12:35.751027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.057 [2024-11-26 04:12:35.751036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:34.057 [2024-11-26 04:12:35.751046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:34.057 [2024-11-26 04:12:35.751054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.057 [2024-11-26 04:12:35.751093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.057 [2024-11-26 04:12:35.751102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:34.057 [2024-11-26 04:12:35.751114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:34.057 [2024-11-26 04:12:35.751121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.057 [2024-11-26 04:12:35.751143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:34.057 [2024-11-26 04:12:35.752398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.057 [2024-11-26 04:12:35.752428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:34.057 [2024-11-26 04:12:35.752437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:16:34.057 [2024-11-26 04:12:35.752444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:34.057 [2024-11-26 04:12:35.752472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.057 [2024-11-26 04:12:35.752480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:34.057 [2024-11-26 04:12:35.752490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:34.057 [2024-11-26 04:12:35.752515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.057 [2024-11-26 04:12:35.752535] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:34.057 [2024-11-26 04:12:35.752552] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:34.057 [2024-11-26 04:12:35.752587] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:34.057 [2024-11-26 04:12:35.752604] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:34.057 [2024-11-26 04:12:35.752677] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:34.057 [2024-11-26 04:12:35.752689] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:34.057 [2024-11-26 04:12:35.752700] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:34.057 [2024-11-26 04:12:35.752710] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:34.057 [2024-11-26 04:12:35.752718] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:34.057 [2024-11-26 04:12:35.752727] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:34.057 [2024-11-26 04:12:35.752736] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:34.057 [2024-11-26 04:12:35.752744] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:34.057 [2024-11-26 04:12:35.752751] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:34.057 [2024-11-26 04:12:35.752758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.057 [2024-11-26 04:12:35.752765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:34.057 [2024-11-26 04:12:35.752772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:16:34.057 [2024-11-26 04:12:35.752781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.057 [2024-11-26 04:12:35.752838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.057 [2024-11-26 04:12:35.752846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:34.057 [2024-11-26 04:12:35.752856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:34.057 [2024-11-26 04:12:35.752863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.057 [2024-11-26 04:12:35.752934] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:34.057 [2024-11-26 04:12:35.752943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:34.057 [2024-11-26 04:12:35.752951] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.057 [2024-11-26 
04:12:35.752963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.057 [2024-11-26 04:12:35.752972] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:34.057 [2024-11-26 04:12:35.752978] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:34.057 [2024-11-26 04:12:35.752985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:34.057 [2024-11-26 04:12:35.752992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:34.057 [2024-11-26 04:12:35.752999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.057 [2024-11-26 04:12:35.753012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:34.057 [2024-11-26 04:12:35.753018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:34.057 [2024-11-26 04:12:35.753024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.057 [2024-11-26 04:12:35.753038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:34.057 [2024-11-26 04:12:35.753045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:34.057 [2024-11-26 04:12:35.753052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:34.057 [2024-11-26 04:12:35.753067] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:34.057 [2024-11-26 04:12:35.753074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:34.057 [2024-11-26 04:12:35.753091] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:34.057 [2024-11-26 04:12:35.753098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:34.057 [2024-11-26 04:12:35.753106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:34.057 [2024-11-26 04:12:35.753113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:34.057 [2024-11-26 04:12:35.753127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:34.057 [2024-11-26 04:12:35.753134] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:34.057 [2024-11-26 04:12:35.753148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:34.057 [2024-11-26 04:12:35.753156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:34.057 [2024-11-26 04:12:35.753170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:34.057 [2024-11-26 04:12:35.753177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:34.057 [2024-11-26 04:12:35.753191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 
00:16:34.057 [2024-11-26 04:12:35.753202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.057 [2024-11-26 04:12:35.753216] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:34.057 [2024-11-26 04:12:35.753223] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:34.057 [2024-11-26 04:12:35.753230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.057 [2024-11-26 04:12:35.753238] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:34.057 [2024-11-26 04:12:35.753249] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:34.057 [2024-11-26 04:12:35.753256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.057 [2024-11-26 04:12:35.753264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.057 [2024-11-26 04:12:35.753272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:34.057 [2024-11-26 04:12:35.753280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:34.057 [2024-11-26 04:12:35.753288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:34.057 [2024-11-26 04:12:35.753296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:34.057 [2024-11-26 04:12:35.753303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:34.057 [2024-11-26 04:12:35.753310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:34.057 [2024-11-26 04:12:35.753319] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:34.057 [2024-11-26 04:12:35.753333] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.057 [2024-11-26 04:12:35.753342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:34.057 [2024-11-26 04:12:35.753350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:34.057 [2024-11-26 04:12:35.753358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:34.057 [2024-11-26 04:12:35.753366] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:34.057 [2024-11-26 04:12:35.753374] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:34.057 [2024-11-26 04:12:35.753382] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:34.057 [2024-11-26 04:12:35.753389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:34.057 [2024-11-26 04:12:35.753397] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:34.057 [2024-11-26 04:12:35.753405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 
blk_offs:0x6160 blk_sz:0x40 00:16:34.058 [2024-11-26 04:12:35.753413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:34.058 [2024-11-26 04:12:35.753421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:34.058 [2024-11-26 04:12:35.753429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:34.058 [2024-11-26 04:12:35.753436] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:34.058 [2024-11-26 04:12:35.753443] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:34.058 [2024-11-26 04:12:35.753450] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.058 [2024-11-26 04:12:35.753462] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:34.058 [2024-11-26 04:12:35.753470] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:34.058 [2024-11-26 04:12:35.753477] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:34.058 [2024-11-26 04:12:35.753485] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:34.058 [2024-11-26 04:12:35.753492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.753515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:34.058 [2024-11-26 04:12:35.753523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:16:34.058 [2024-11-26 04:12:35.753532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.759361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.759391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:34.058 [2024-11-26 04:12:35.759403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.796 ms 00:16:34.058 [2024-11-26 04:12:35.759410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.759490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.759498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:34.058 [2024-11-26 04:12:35.759523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:34.058 [2024-11-26 04:12:35.759529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.776703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.776759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:34.058 [2024-11-26 04:12:35.776783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.128 ms 00:16:34.058 [2024-11-26 04:12:35.776795] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.776856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.776870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:34.058 [2024-11-26 04:12:35.776883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:34.058 [2024-11-26 04:12:35.776901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.777311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.777333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:34.058 [2024-11-26 04:12:35.777347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:16:34.058 [2024-11-26 04:12:35.777359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.777577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.777611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:34.058 [2024-11-26 04:12:35.777626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:16:34.058 [2024-11-26 04:12:35.777638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.784083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.784123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:34.058 [2024-11-26 04:12:35.784137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.413 ms 00:16:34.058 [2024-11-26 04:12:35.784148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.786670] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:34.058 [2024-11-26 04:12:35.786706] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:34.058 [2024-11-26 04:12:35.786715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.786722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:34.058 [2024-11-26 04:12:35.786731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.459 ms 00:16:34.058 [2024-11-26 04:12:35.786738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.800988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.801135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:34.058 [2024-11-26 04:12:35.801151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.216 ms 00:16:34.058 [2024-11-26 04:12:35.801159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.802772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.802803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:34.058 [2024-11-26 04:12:35.802812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:16:34.058 [2024-11-26 04:12:35.802818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.804193] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.804302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:34.058 [2024-11-26 04:12:35.804315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:16:34.058 [2024-11-26 04:12:35.804326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.058 [2024-11-26 04:12:35.804518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.058 [2024-11-26 04:12:35.804529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:34.058 [2024-11-26 04:12:35.804537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:16:34.058 [2024-11-26 04:12:35.804546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.821424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.821469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:34.317 [2024-11-26 04:12:35.821480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.863 ms 00:16:34.317 [2024-11-26 04:12:35.821488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.828842] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:34.317 [2024-11-26 04:12:35.831221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.831343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:34.317 [2024-11-26 04:12:35.831358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.660 ms 00:16:34.317 [2024-11-26 04:12:35.831366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.831436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.831446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:34.317 [2024-11-26 04:12:35.831454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:34.317 [2024-11-26 04:12:35.831462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.831532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.831549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:34.317 [2024-11-26 04:12:35.831557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:34.317 [2024-11-26 04:12:35.831563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.832705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.832731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:34.317 [2024-11-26 04:12:35.832745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:16:34.317 [2024-11-26 04:12:35.832752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.832791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.832799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:34.317 [2024-11-26 04:12:35.832812] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:34.317 [2024-11-26 04:12:35.832819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.832850] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:34.317 [2024-11-26 04:12:35.832859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.832866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:34.317 [2024-11-26 04:12:35.832874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:34.317 [2024-11-26 04:12:35.832883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.836036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.836068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:34.317 [2024-11-26 04:12:35.836077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.137 ms 00:16:34.317 [2024-11-26 04:12:35.836089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.836150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.317 [2024-11-26 04:12:35.836160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:34.317 [2024-11-26 04:12:35.836171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:34.317 [2024-11-26 04:12:35.836178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.317 [2024-11-26 04:12:35.837490] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 94.683 ms, result 0 00:16:35.251  [2024-11-26T04:12:37.953Z] Copying: 44/1024 [MB] (44 MBps) [2024-11-26T04:12:38.901Z] Copying: 94/1024 [MB] (49 MBps) [2024-11-26T04:12:39.892Z] Copying: 143/1024 [MB] (49 MBps) [2024-11-26T04:12:41.266Z] Copying: 189/1024 [MB] (46 MBps) [2024-11-26T04:12:42.200Z] Copying: 236/1024 [MB] (46 MBps) [2024-11-26T04:12:43.135Z] Copying: 282/1024 [MB] (45 MBps) [2024-11-26T04:12:44.068Z] Copying: 333/1024 [MB] (51 MBps) [2024-11-26T04:12:45.001Z] Copying: 380/1024 [MB] (46 MBps) [2024-11-26T04:12:45.935Z] Copying: 425/1024 [MB] (45 MBps) [2024-11-26T04:12:46.870Z] Copying: 472/1024 [MB] (46 MBps) [2024-11-26T04:12:48.244Z] Copying: 518/1024 [MB] (46 MBps) [2024-11-26T04:12:49.177Z] Copying: 571/1024 [MB] (53 MBps) [2024-11-26T04:12:50.109Z] Copying: 617/1024 [MB] (46 MBps) [2024-11-26T04:12:51.043Z] Copying: 663/1024 [MB] (45 MBps) [2024-11-26T04:12:51.978Z] Copying: 709/1024 [MB] (46 MBps) [2024-11-26T04:12:52.912Z] Copying: 755/1024 [MB] (46 MBps) [2024-11-26T04:12:54.287Z] Copying: 801/1024 [MB] (46 MBps) [2024-11-26T04:12:55.221Z] Copying: 847/1024 [MB] (45 MBps) [2024-11-26T04:12:56.156Z] Copying: 892/1024 [MB] (45 MBps) [2024-11-26T04:12:57.092Z] Copying: 939/1024 [MB] (46 MBps) [2024-11-26T04:12:58.026Z] Copying: 985/1024 [MB] (46 MBps) [2024-11-26T04:12:58.962Z] Copying: 1023/1024 [MB] (38 MBps) [2024-11-26T04:12:58.962Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-11-26 04:12:58.712885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.712934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:57.194 [2024-11-26 04:12:58.712954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:16:57.194 [2024-11-26 04:12:58.712963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.714538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:57.194 [2024-11-26 04:12:58.716252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.716286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:57.194 [2024-11-26 04:12:58.716302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:16:57.194 [2024-11-26 04:12:58.716309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.728363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.728400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:57.194 [2024-11-26 04:12:58.728409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.201 ms 00:16:57.194 [2024-11-26 04:12:58.728417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.746499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.746536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:57.194 [2024-11-26 04:12:58.746546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.067 ms 00:16:57.194 [2024-11-26 04:12:58.746554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.752636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.752664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:57.194 [2024-11-26 04:12:58.752678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.050 ms 00:16:57.194 [2024-11-26 04:12:58.752686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.753839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.753869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:57.194 [2024-11-26 04:12:58.753878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:16:57.194 [2024-11-26 04:12:58.753885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.756944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.756976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:57.194 [2024-11-26 04:12:58.756984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:16:57.194 [2024-11-26 04:12:58.757001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.802666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.802709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:57.194 [2024-11-26 04:12:58.802719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.632 ms 00:16:57.194 [2024-11-26 04:12:58.802727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.804414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 
04:12:58.804445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:57.194 [2024-11-26 04:12:58.804455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:16:57.194 [2024-11-26 04:12:58.804461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.805521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.805549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:57.194 [2024-11-26 04:12:58.805557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:16:57.194 [2024-11-26 04:12:58.805564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.806445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.806475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:57.194 [2024-11-26 04:12:58.806484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:16:57.194 [2024-11-26 04:12:58.806490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.807308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.194 [2024-11-26 04:12:58.807338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:57.194 [2024-11-26 04:12:58.807346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:16:57.194 [2024-11-26 04:12:58.807353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-11-26 04:12:58.807378] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:57.194 [2024-11-26 04:12:58.807398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120832 / 261120 wr_cnt: 1 state: open 00:16:57.194 [2024-11-26 04:12:58.807408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807490] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 
[2024-11-26 04:12:58.807691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:16:57.194 [2024-11-26 04:12:58.807870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:57.194 [2024-11-26 04:12:58.807891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.807996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:57.195 [2024-11-26 04:12:58.808154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:57.195 [2024-11-26 04:12:58.808162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67d4fab1-ad45-4e8d-83b0-1fc3f149cedb 00:16:57.195 [2024-11-26 04:12:58.808169] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120832 00:16:57.195 [2024-11-26 04:12:58.808176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121792 00:16:57.195 [2024-11-26 04:12:58.808183] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120832 00:16:57.195 [2024-11-26 04:12:58.808191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:16:57.195 [2024-11-26 04:12:58.808201] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:57.195 [2024-11-26 04:12:58.808209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:57.195 [2024-11-26 04:12:58.808216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:57.195 [2024-11-26 04:12:58.808222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:57.195 [2024-11-26 04:12:58.808228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:57.195 [2024-11-26 04:12:58.808235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.195 [2024-11-26 04:12:58.808243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:57.195 [2024-11-26 04:12:58.808250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:16:57.195 [2024-11-26 04:12:58.808257] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.809629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.195 [2024-11-26 04:12:58.809660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:57.195 [2024-11-26 04:12:58.809672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:16:57.195 [2024-11-26 04:12:58.809680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.809735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.195 [2024-11-26 04:12:58.809743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:57.195 [2024-11-26 04:12:58.809751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:57.195 [2024-11-26 04:12:58.809761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.814727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.814759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.195 [2024-11-26 04:12:58.814768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.814776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.814824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.814832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.195 [2024-11-26 04:12:58.814839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.814846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.814886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.814901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.195 [2024-11-26 04:12:58.814908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.814915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.814929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.814941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.195 [2024-11-26 04:12:58.814948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.814955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.823183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.823229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.195 [2024-11-26 04:12:58.823238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.823245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.826840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.826872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.195 [2024-11-26 04:12:58.826881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.826889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.826941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.826950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.195 [2024-11-26 04:12:58.826957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.826968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.826993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.827000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.195 [2024-11-26 04:12:58.827011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.827018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.827080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.827089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.195 [2024-11-26 04:12:58.827097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.827107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.827136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.827144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:57.195 [2024-11-26 04:12:58.827152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.827162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.827195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.827203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:57.195 [2024-11-26 04:12:58.827210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.827217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.827260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.195 [2024-11-26 04:12:58.827269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:57.195 [2024-11-26 04:12:58.827276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.195 [2024-11-26 04:12:58.827283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.195 [2024-11-26 04:12:58.827397] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 115.088 ms, result 0 00:16:58.569 00:16:58.569 00:16:58.569 04:13:00 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:16:58.569 [2024-11-26 04:13:00.091974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
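The spdk_dd invocation above (restore.sh@80) is the read-back half of the restore check: it copies 262144 blocks out of the freshly restored ftl0 bdev into a plain file, skipping the first 131072 input blocks, and that file is later compared against a checksum recorded before the dirty shutdown. A minimal sketch of the same read-plus-verify step, assuming only the paths and flags visible in the log:

SPDK=/home/vagrant/spdk_repo/spdk
# Copy 262144 blocks from the ftl0 bdev (described by ftl.json) into a file,
# skipping the first 131072 input blocks, as the command line above does.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
    --json="$SPDK/test/ftl/config/ftl.json" --skip=131072 --count=262144
# Verify the read-back data against the stored checksum (restore.sh@82 below).
md5sum -c "$SPDK/test/ftl/testfile.md5"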
00:16:58.569 [2024-11-26 04:13:00.092093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84672 ] 00:16:58.569 [2024-11-26 04:13:00.238964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.569 [2024-11-26 04:13:00.269220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.830 [2024-11-26 04:13:00.352818] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.830 [2024-11-26 04:13:00.352885] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.830 [2024-11-26 04:13:00.498046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.498107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.830 [2024-11-26 04:13:00.498121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.830 [2024-11-26 04:13:00.498134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.498181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.498194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.830 [2024-11-26 04:13:00.498202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:58.830 [2024-11-26 04:13:00.498211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.498234] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.830 [2024-11-26 04:13:00.498583] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.830 [2024-11-26 04:13:00.498604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.498618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.830 [2024-11-26 04:13:00.498626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:16:58.830 [2024-11-26 04:13:00.498634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.499686] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.830 [2024-11-26 04:13:00.501896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.501930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.830 [2024-11-26 04:13:00.501948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:16:58.830 [2024-11-26 04:13:00.501956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.502006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.502015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:58.830 [2024-11-26 04:13:00.502023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:58.830 [2024-11-26 04:13:00.502030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.506702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 
04:13:00.506734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.830 [2024-11-26 04:13:00.506744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.616 ms 00:16:58.830 [2024-11-26 04:13:00.506754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.506818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.506827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.830 [2024-11-26 04:13:00.506835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:58.830 [2024-11-26 04:13:00.506842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.506884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.506893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.830 [2024-11-26 04:13:00.506903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:58.830 [2024-11-26 04:13:00.506915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.506939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:58.830 [2024-11-26 04:13:00.508207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.508236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.830 [2024-11-26 04:13:00.508245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:16:58.830 [2024-11-26 04:13:00.508256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.508285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.830 [2024-11-26 04:13:00.508293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.830 [2024-11-26 04:13:00.508303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:58.830 [2024-11-26 04:13:00.508310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.830 [2024-11-26 04:13:00.508329] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.830 [2024-11-26 04:13:00.508346] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:58.830 [2024-11-26 04:13:00.508378] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.830 [2024-11-26 04:13:00.508392] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:58.831 [2024-11-26 04:13:00.508464] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:58.831 [2024-11-26 04:13:00.508475] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.831 [2024-11-26 04:13:00.508490] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:58.831 [2024-11-26 04:13:00.508516] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.831 [2024-11-26 04:13:00.508532] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.831 [2024-11-26 04:13:00.508539] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:58.831 [2024-11-26 04:13:00.508547] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.831 [2024-11-26 04:13:00.508557] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:58.831 [2024-11-26 04:13:00.508566] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:58.831 [2024-11-26 04:13:00.508578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.831 [2024-11-26 04:13:00.508585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.831 [2024-11-26 04:13:00.508593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:16:58.831 [2024-11-26 04:13:00.508602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.831 [2024-11-26 04:13:00.508661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.831 [2024-11-26 04:13:00.508669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.831 [2024-11-26 04:13:00.508676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:58.831 [2024-11-26 04:13:00.508686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.831 [2024-11-26 04:13:00.508755] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.831 [2024-11-26 04:13:00.508765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.831 [2024-11-26 04:13:00.508780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.831 [2024-11-26 04:13:00.508788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.831 [2024-11-26 04:13:00.508797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.831 [2024-11-26 04:13:00.508804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.831 [2024-11-26 04:13:00.508811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:58.831 [2024-11-26 04:13:00.508817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.831 [2024-11-26 04:13:00.508824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:58.831 [2024-11-26 04:13:00.508830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.831 [2024-11-26 04:13:00.508837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.831 [2024-11-26 04:13:00.508843] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:58.831 [2024-11-26 04:13:00.508850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.831 [2024-11-26 04:13:00.508863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.831 [2024-11-26 04:13:00.508870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:58.831 [2024-11-26 04:13:00.508877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.831 [2024-11-26 04:13:00.508885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.831 [2024-11-26 04:13:00.508892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:58.831 [2024-11-26 04:13:00.508901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:16:58.831 [2024-11-26 04:13:00.508908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:58.831 [2024-11-26 04:13:00.508916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:58.831 [2024-11-26 04:13:00.508923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:58.831 [2024-11-26 04:13:00.508930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.831 [2024-11-26 04:13:00.508938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:58.831 [2024-11-26 04:13:00.508944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.831 [2024-11-26 04:13:00.508952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.831 [2024-11-26 04:13:00.508959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:58.831 [2024-11-26 04:13:00.508966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.831 [2024-11-26 04:13:00.508973] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.831 [2024-11-26 04:13:00.508980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:58.831 [2024-11-26 04:13:00.508987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.831 [2024-11-26 04:13:00.508994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.831 [2024-11-26 04:13:00.509001] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:58.831 [2024-11-26 04:13:00.509008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.831 [2024-11-26 04:13:00.509018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.831 [2024-11-26 04:13:00.509026] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:58.831 [2024-11-26 04:13:00.509033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.831 [2024-11-26 04:13:00.509040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.831 [2024-11-26 04:13:00.509047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:58.831 [2024-11-26 04:13:00.509054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.831 [2024-11-26 04:13:00.509061] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.831 [2024-11-26 04:13:00.509068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.831 [2024-11-26 04:13:00.509076] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.831 [2024-11-26 04:13:00.509084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.831 [2024-11-26 04:13:00.509093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.831 [2024-11-26 04:13:00.509101] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.831 [2024-11-26 04:13:00.509108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.831 [2024-11-26 04:13:00.509116] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.831 [2024-11-26 04:13:00.509123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.831 [2024-11-26 04:13:00.509131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.831 [2024-11-26 04:13:00.509140] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.831 [2024-11-26 04:13:00.509150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.831 [2024-11-26 04:13:00.509160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:58.831 [2024-11-26 04:13:00.509168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:58.831 [2024-11-26 04:13:00.509176] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:58.831 [2024-11-26 04:13:00.509184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:58.831 [2024-11-26 04:13:00.509191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:58.831 [2024-11-26 04:13:00.509199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:58.831 [2024-11-26 04:13:00.509207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:58.831 [2024-11-26 04:13:00.509215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:58.831 [2024-11-26 04:13:00.509224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:58.831 [2024-11-26 04:13:00.509232] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:58.831 [2024-11-26 04:13:00.509239] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:58.831 [2024-11-26 04:13:00.509246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:58.831 [2024-11-26 04:13:00.509254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:58.831 [2024-11-26 04:13:00.509260] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.831 [2024-11-26 04:13:00.509270] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.831 [2024-11-26 04:13:00.509280] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.831 [2024-11-26 04:13:00.509287] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.831 [2024-11-26 04:13:00.509294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.831 [2024-11-26 04:13:00.509302] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:16:58.831 [2024-11-26 04:13:00.509309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.831 [2024-11-26 04:13:00.509315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.831 [2024-11-26 04:13:00.509322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:16:58.831 [2024-11-26 04:13:00.509332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.831 [2024-11-26 04:13:00.515250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.831 [2024-11-26 04:13:00.515283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.831 [2024-11-26 04:13:00.515293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.883 ms 00:16:58.831 [2024-11-26 04:13:00.515301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.831 [2024-11-26 04:13:00.515389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.831 [2024-11-26 04:13:00.515400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.831 [2024-11-26 04:13:00.515410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:58.832 [2024-11-26 04:13:00.515417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.532589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.532647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.832 [2024-11-26 04:13:00.532666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.127 ms 00:16:58.832 [2024-11-26 04:13:00.532678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.532738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.532753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.832 [2024-11-26 04:13:00.532770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.832 [2024-11-26 04:13:00.532791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.533209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.533244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.832 [2024-11-26 04:13:00.533259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:16:58.832 [2024-11-26 04:13:00.533273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.533454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.533483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.832 [2024-11-26 04:13:00.533497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:58.832 [2024-11-26 04:13:00.533528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.539830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.539878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.832 [2024-11-26 04:13:00.539892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.270 ms 00:16:58.832 [2024-11-26 
04:13:00.539904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.542587] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:16:58.832 [2024-11-26 04:13:00.542624] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.832 [2024-11-26 04:13:00.542634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.542641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.832 [2024-11-26 04:13:00.542649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:16:58.832 [2024-11-26 04:13:00.542656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.556961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.557000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.832 [2024-11-26 04:13:00.557019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.213 ms 00:16:58.832 [2024-11-26 04:13:00.557026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.558822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.558853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:58.832 [2024-11-26 04:13:00.558863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.757 ms 00:16:58.832 [2024-11-26 04:13:00.558869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.560257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.560288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.832 [2024-11-26 04:13:00.560301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:16:58.832 [2024-11-26 04:13:00.560310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.560593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.560613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.832 [2024-11-26 04:13:00.560622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:16:58.832 [2024-11-26 04:13:00.560632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.577879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.577932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:58.832 [2024-11-26 04:13:00.577943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.227 ms 00:16:58.832 [2024-11-26 04:13:00.577951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.585413] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:58.832 [2024-11-26 04:13:00.588040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.588072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:58.832 [2024-11-26 04:13:00.588085] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.037 ms 00:16:58.832 [2024-11-26 04:13:00.588092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.588166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.588176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:58.832 [2024-11-26 04:13:00.588184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:58.832 [2024-11-26 04:13:00.588192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.589302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.589338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.832 [2024-11-26 04:13:00.589347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:16:58.832 [2024-11-26 04:13:00.589355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.590546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.590575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:58.832 [2024-11-26 04:13:00.590583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:16:58.832 [2024-11-26 04:13:00.590593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.590623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.590631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.832 [2024-11-26 04:13:00.590642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.832 [2024-11-26 04:13:00.590649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.832 [2024-11-26 04:13:00.590681] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:58.832 [2024-11-26 04:13:00.590690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.832 [2024-11-26 04:13:00.590699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:58.832 [2024-11-26 04:13:00.590706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:58.832 [2024-11-26 04:13:00.590717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.091 [2024-11-26 04:13:00.593860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.091 [2024-11-26 04:13:00.593894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.091 [2024-11-26 04:13:00.593905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:16:59.091 [2024-11-26 04:13:00.593920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.091 [2024-11-26 04:13:00.593984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.091 [2024-11-26 04:13:00.593994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.091 [2024-11-26 04:13:00.594006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:59.091 [2024-11-26 04:13:00.594014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.091 [2024-11-26 04:13:00.600590] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 101.413 ms, result 0 00:17:00.025  [2024-11-26T04:13:03.186Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-26T04:13:04.119Z] Copying: 96/1024 [MB] (50 MBps) [2024-11-26T04:13:05.073Z] Copying: 144/1024 [MB] (47 MBps) [2024-11-26T04:13:06.040Z] Copying: 194/1024 [MB] (50 MBps) [2024-11-26T04:13:06.975Z] Copying: 241/1024 [MB] (47 MBps) [2024-11-26T04:13:07.908Z] Copying: 290/1024 [MB] (48 MBps) [2024-11-26T04:13:08.843Z] Copying: 340/1024 [MB] (49 MBps) [2024-11-26T04:13:09.783Z] Copying: 388/1024 [MB] (48 MBps) [2024-11-26T04:13:11.158Z] Copying: 436/1024 [MB] (48 MBps) [2024-11-26T04:13:12.093Z] Copying: 482/1024 [MB] (45 MBps) [2024-11-26T04:13:13.029Z] Copying: 530/1024 [MB] (47 MBps) [2024-11-26T04:13:13.964Z] Copying: 579/1024 [MB] (48 MBps) [2024-11-26T04:13:14.898Z] Copying: 626/1024 [MB] (47 MBps) [2024-11-26T04:13:15.832Z] Copying: 674/1024 [MB] (48 MBps) [2024-11-26T04:13:17.205Z] Copying: 721/1024 [MB] (46 MBps) [2024-11-26T04:13:18.138Z] Copying: 769/1024 [MB] (48 MBps) [2024-11-26T04:13:19.072Z] Copying: 820/1024 [MB] (50 MBps) [2024-11-26T04:13:20.004Z] Copying: 868/1024 [MB] (48 MBps) [2024-11-26T04:13:20.937Z] Copying: 916/1024 [MB] (47 MBps) [2024-11-26T04:13:21.870Z] Copying: 963/1024 [MB] (47 MBps) [2024-11-26T04:13:22.128Z] Copying: 1015/1024 [MB] (52 MBps) [2024-11-26T04:13:23.064Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-11-26 04:13:22.704040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.296 [2024-11-26 04:13:22.704107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:21.296 [2024-11-26 04:13:22.704121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:21.296 [2024-11-26 04:13:22.704128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.704149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.297 [2024-11-26 04:13:22.704625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.704650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:21.297 [2024-11-26 04:13:22.704659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:17:21.297 [2024-11-26 04:13:22.704667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.704889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.704909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:21.297 [2024-11-26 04:13:22.704918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:17:21.297 [2024-11-26 04:13:22.704926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.709156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.709187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:21.297 [2024-11-26 04:13:22.709197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.216 ms 00:17:21.297 [2024-11-26 04:13:22.709204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.715333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.715363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Finish L2P unmaps 00:17:21.297 [2024-11-26 04:13:22.715378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.100 ms 00:17:21.297 [2024-11-26 04:13:22.715385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.716590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.716621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:21.297 [2024-11-26 04:13:22.716629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:17:21.297 [2024-11-26 04:13:22.716636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.720138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.720172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:21.297 [2024-11-26 04:13:22.720181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.472 ms 00:17:21.297 [2024-11-26 04:13:22.720189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.775452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.775526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:21.297 [2024-11-26 04:13:22.775540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.238 ms 00:17:21.297 [2024-11-26 04:13:22.775548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.777709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.777754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:21.297 [2024-11-26 04:13:22.777766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:17:21.297 [2024-11-26 04:13:22.777775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.778805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.778837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:21.297 [2024-11-26 04:13:22.778846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:17:21.297 [2024-11-26 04:13:22.778853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.779747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.779790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:21.297 [2024-11-26 04:13:22.779799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:17:21.297 [2024-11-26 04:13:22.779806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.780585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.297 [2024-11-26 04:13:22.780615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:21.297 [2024-11-26 04:13:22.780624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:17:21.297 [2024-11-26 04:13:22.780631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.297 [2024-11-26 04:13:22.780658] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:21.297 
[2024-11-26 04:13:22.780672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:17:21.297 [2024-11-26 04:13:22.780681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2-99: 0 / 261120 wr_cnt: 0 state: free 00:17:21.298 [2024-11-26 04:13:22.781387] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:21.298 [2024-11-26 04:13:22.781402] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:21.298 [2024-11-26 04:13:22.781409] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67d4fab1-ad45-4e8d-83b0-1fc3f149cedb
00:17:21.298 [2024-11-26 04:13:22.781417] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632
00:17:21.298 [2024-11-26 04:13:22.781424] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13760
00:17:21.298 [2024-11-26 04:13:22.781431] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12800
00:17:21.298 [2024-11-26 04:13:22.781439] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0750
00:17:21.298 [2024-11-26 04:13:22.781451] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:21.298 [2024-11-26 04:13:22.781458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:21.298 [2024-11-26 04:13:22.781465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:21.298 [2024-11-26 04:13:22.781472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:21.298 [2024-11-26 04:13:22.781478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:21.298 [2024-11-26 04:13:22.781485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:21.298 [2024-11-26 04:13:22.781492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:21.298 [2024-11-26 04:13:22.781510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms
00:17:21.298 [2024-11-26 04:13:22.781518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.782919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:21.298 [2024-11-26 04:13:22.782952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:21.298 [2024-11-26 04:13:22.782964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms
00:17:21.298 [2024-11-26 04:13:22.782974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.783025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:21.298 [2024-11-26 04:13:22.783038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:21.298 [2024-11-26 04:13:22.783046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:17:21.298 [2024-11-26 04:13:22.783053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.788046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.298 [2024-11-26 04:13:22.788082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:21.298 [2024-11-26 04:13:22.788091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.298 [2024-11-26 04:13:22.788098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.788146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.298 [2024-11-26 04:13:22.788154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:21.298 [2024-11-26 04:13:22.788167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.298 [2024-11-26 04:13:22.788177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.788236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.298 [2024-11-26 04:13:22.788248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:21.298 [2024-11-26 04:13:22.788261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.298 [2024-11-26 04:13:22.788268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.788282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.298 [2024-11-26 04:13:22.788289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:21.298 [2024-11-26 04:13:22.788300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.298 [2024-11-26 04:13:22.788307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.796798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.298 [2024-11-26 04:13:22.796847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:21.298 [2024-11-26 04:13:22.796858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.298 [2024-11-26 04:13:22.796866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.800593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.298 [2024-11-26 04:13:22.800626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:21.298 [2024-11-26 04:13:22.800635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.298 [2024-11-26 04:13:22.800642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.800678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.298 [2024-11-26 04:13:22.800686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:21.298 [2024-11-26 04:13:22.800694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.298 [2024-11-26 04:13:22.800705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.298 [2024-11-26 04:13:22.800743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.299 [2024-11-26 04:13:22.800751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:21.299 [2024-11-26 04:13:22.800763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.299 [2024-11-26 04:13:22.800773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.299 [2024-11-26 04:13:22.800831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.299 [2024-11-26 04:13:22.800839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:21.299 [2024-11-26 04:13:22.800847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.299 [2024-11-26 04:13:22.800854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.299 [2024-11-26 04:13:22.800881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.299 [2024-11-26 04:13:22.800890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:21.299 [2024-11-26 04:13:22.800897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.299 [2024-11-26 04:13:22.800904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.299 [2024-11-26 04:13:22.800936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.299 [2024-11-26 04:13:22.800943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:21.299 [2024-11-26 04:13:22.800951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.299 [2024-11-26 04:13:22.800958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.299 [2024-11-26 04:13:22.801005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:21.299 [2024-11-26 04:13:22.801015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:21.299 [2024-11-26 04:13:22.801022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:21.299 [2024-11-26 04:13:22.801030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:21.299 [2024-11-26 04:13:22.801136] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 97.071 ms, result 0
00:17:21.299
00:17:21.299
00:17:21.299 04:13:22 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:17:23.201 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:17:23.201 04:13:24 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:17:23.201 04:13:24 -- ftl/restore.sh@85 -- # restore_kill
00:17:23.201 04:13:24 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:17:23.459 04:13:25 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:17:23.459 04:13:25 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:23.459 04:13:25 -- ftl/restore.sh@32 -- # killprocess 83654
00:17:23.459 04:13:25 -- common/autotest_common.sh@936 -- # '[' -z 83654 ']'
00:17:23.459 04:13:25 -- common/autotest_common.sh@940 -- # kill -0 83654
00:17:23.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83654) - No such process
00:17:23.459 Process with pid 83654 is not found
00:17:23.459 04:13:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83654 is not found'
00:17:23.459 04:13:25 -- ftl/restore.sh@33 -- # remove_shm
00:17:23.459 Remove shared memory files
00:17:23.459 04:13:25 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:17:23.459 04:13:25 -- ftl/common.sh@205 -- # rm -f rm -f
00:17:23.459 04:13:25 -- ftl/common.sh@206 -- # rm -f rm -f
00:17:23.459 04:13:25 -- ftl/common.sh@207 -- # rm -f rm -f
00:17:23.459 04:13:25 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:17:23.459 04:13:25 -- ftl/common.sh@209 -- # rm -f rm -f
00:17:23.459
00:17:23.459 real 1m54.954s
00:17:23.459 user 1m44.652s
00:17:23.459 sys 0m11.422s
00:17:23.459 04:13:25 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:23.459 04:13:25 -- common/autotest_common.sh@10 -- # set +x
00:17:23.459 ************************************
00:17:23.459 END TEST ftl_restore
00:17:23.459 ************************************
00:17:23.460 04:13:25 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0
00:17:23.460 04:13:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:23.460 04:13:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:23.460 04:13:25 -- common/autotest_common.sh@10 -- # set +x
00:17:23.460 ************************************
00:17:23.460 START TEST ftl_dirty_shutdown
00:17:23.460 ************************************
00:17:23.460 04:13:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0
00:17:23.460 * Looking for test storage...
00:17:23.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:23.460 04:13:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:17:23.460 04:13:25 -- common/autotest_common.sh@1690 -- # lcov --version
00:17:23.460 04:13:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:17:23.460 04:13:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:17:23.460 04:13:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:17:23.460 04:13:25 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:17:23.460 04:13:25 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:17:23.460 04:13:25 -- scripts/common.sh@335 -- # IFS=.-:
00:17:23.460 04:13:25 -- scripts/common.sh@335 -- # read -ra ver1
00:17:23.460 04:13:25 -- scripts/common.sh@336 -- # IFS=.-:
00:17:23.460 04:13:25 -- scripts/common.sh@336 -- # read -ra ver2
00:17:23.460 04:13:25 -- scripts/common.sh@337 -- # local 'op=<'
00:17:23.460 04:13:25 -- scripts/common.sh@339 -- # ver1_l=2
00:17:23.460 04:13:25 -- scripts/common.sh@340 -- # ver2_l=1
00:17:23.460 04:13:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:17:23.460 04:13:25 -- scripts/common.sh@343 -- # case "$op" in
00:17:23.460 04:13:25 -- scripts/common.sh@344 -- # : 1
00:17:23.460 04:13:25 -- scripts/common.sh@363 -- # (( v = 0 ))
00:17:23.460 04:13:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:23.460 04:13:25 -- scripts/common.sh@364 -- # decimal 1
00:17:23.460 04:13:25 -- scripts/common.sh@352 -- # local d=1
00:17:23.460 04:13:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:23.460 04:13:25 -- scripts/common.sh@354 -- # echo 1
00:17:23.460 04:13:25 -- scripts/common.sh@364 -- # ver1[v]=1
00:17:23.460 04:13:25 -- scripts/common.sh@365 -- # decimal 2
00:17:23.460 04:13:25 -- scripts/common.sh@352 -- # local d=2
00:17:23.460 04:13:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:23.460 04:13:25 -- scripts/common.sh@354 -- # echo 2
00:17:23.460 04:13:25 -- scripts/common.sh@365 -- # ver2[v]=2
00:17:23.460 04:13:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:17:23.460 04:13:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:17:23.460 04:13:25 -- scripts/common.sh@367 -- # return 0
00:17:23.460 04:13:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:23.460 04:13:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:17:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:23.460 --rc genhtml_branch_coverage=1
00:17:23.460 --rc genhtml_function_coverage=1
00:17:23.460 --rc genhtml_legend=1
00:17:23.460 --rc geninfo_all_blocks=1
00:17:23.460 --rc geninfo_unexecuted_blocks=1
00:17:23.460
00:17:23.460 '
00:17:23.460 04:13:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:17:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:23.460 --rc genhtml_branch_coverage=1
00:17:23.460 --rc genhtml_function_coverage=1
00:17:23.460 --rc genhtml_legend=1
00:17:23.460 --rc geninfo_all_blocks=1
00:17:23.460 --rc geninfo_unexecuted_blocks=1
00:17:23.460
00:17:23.460 '
00:17:23.460 04:13:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:17:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:23.460 --rc genhtml_branch_coverage=1
00:17:23.460 --rc genhtml_function_coverage=1
00:17:23.460 --rc genhtml_legend=1
00:17:23.460 --rc geninfo_all_blocks=1
00:17:23.460 --rc geninfo_unexecuted_blocks=1
00:17:23.460
00:17:23.460 '
00:17:23.460 04:13:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:17:23.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:23.460 --rc genhtml_branch_coverage=1
00:17:23.460 --rc genhtml_function_coverage=1
00:17:23.460 --rc genhtml_legend=1
00:17:23.460 --rc geninfo_all_blocks=1
00:17:23.460 --rc geninfo_unexecuted_blocks=1
00:17:23.460
00:17:23.460 '
00:17:23.460 04:13:25 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:17:23.460 04:13:25 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:17:23.460 04:13:25 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:17:23.460 04:13:25 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:17:23.460 04:13:25 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:17:23.460 04:13:25 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:17:23.460 04:13:25 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:23.460 04:13:25 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:17:23.460 04:13:25 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:17:23.460 04:13:25 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:23.460 04:13:25 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:23.460 04:13:25 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:17:23.460 04:13:25 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:17:23.719 04:13:25 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:23.719 04:13:25 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:23.719 04:13:25 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:17:23.719 04:13:25 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:17:23.719 04:13:25 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:23.719 04:13:25 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:23.719 04:13:25 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:17:23.719 04:13:25 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:17:23.719 04:13:25 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:23.719 04:13:25 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:23.719 04:13:25 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:23.719 04:13:25 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:23.719 04:13:25 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:17:23.719 04:13:25 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:17:23.719 04:13:25 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:23.719 04:13:25 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@45 -- # svcpid=85002
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 85002
00:17:23.719 04:13:25 -- common/autotest_common.sh@829 -- # '[' -z 85002 ']'
00:17:23.719 04:13:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:23.719 04:13:25 -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:23.719 04:13:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:23.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:23.719 04:13:25 -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:23.719 04:13:25 -- common/autotest_common.sh@10 -- # set +x
00:17:23.719 04:13:25 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:17:23.719 [2024-11-26 04:13:25.282404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:23.719 [2024-11-26 04:13:25.282514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85002 ] 00:17:23.719 [2024-11-26 04:13:25.416788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.719 [2024-11-26 04:13:25.449006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:23.719 [2024-11-26 04:13:25.449193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.685 04:13:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.685 04:13:26 -- common/autotest_common.sh@862 -- # return 0 00:17:24.685 04:13:26 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:24.685 04:13:26 -- ftl/common.sh@54 -- # local name=nvme0 00:17:24.685 04:13:26 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:24.685 04:13:26 -- ftl/common.sh@56 -- # local size=103424 00:17:24.685 04:13:26 -- ftl/common.sh@59 -- # local base_bdev 00:17:24.685 04:13:26 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:24.685 04:13:26 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:24.685 04:13:26 -- ftl/common.sh@62 -- # local base_size 00:17:24.685 04:13:26 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:24.685 04:13:26 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:24.685 04:13:26 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:24.685 04:13:26 -- common/autotest_common.sh@1369 -- # local bs 00:17:24.685 04:13:26 -- common/autotest_common.sh@1370 -- # local nb 00:17:24.685 04:13:26 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:24.945 04:13:26 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:24.945 { 00:17:24.945 "name": "nvme0n1", 00:17:24.945 "aliases": [ 00:17:24.945 "a4803600-d6e8-4e7b-8306-3e40968c2a0a" 00:17:24.945 ], 00:17:24.945 "product_name": "NVMe disk", 00:17:24.945 "block_size": 4096, 00:17:24.945 "num_blocks": 1310720, 00:17:24.945 "uuid": "a4803600-d6e8-4e7b-8306-3e40968c2a0a", 00:17:24.945 "assigned_rate_limits": { 00:17:24.945 "rw_ios_per_sec": 0, 00:17:24.945 "rw_mbytes_per_sec": 0, 00:17:24.945 "r_mbytes_per_sec": 0, 00:17:24.945 "w_mbytes_per_sec": 0 00:17:24.945 }, 00:17:24.945 "claimed": true, 00:17:24.945 "claim_type": "read_many_write_one", 00:17:24.945 "zoned": false, 00:17:24.945 "supported_io_types": { 00:17:24.945 "read": true, 00:17:24.945 "write": true, 00:17:24.945 "unmap": true, 00:17:24.945 "write_zeroes": true, 00:17:24.945 "flush": true, 00:17:24.945 "reset": true, 00:17:24.945 "compare": true, 00:17:24.945 "compare_and_write": false, 00:17:24.945 "abort": true, 00:17:24.945 "nvme_admin": true, 00:17:24.945 "nvme_io": true 00:17:24.945 }, 00:17:24.945 "driver_specific": { 00:17:24.945 "nvme": [ 00:17:24.945 { 00:17:24.945 "pci_address": "0000:00:07.0", 00:17:24.945 "trid": { 00:17:24.945 "trtype": "PCIe", 00:17:24.945 "traddr": "0000:00:07.0" 00:17:24.945 }, 00:17:24.945 "ctrlr_data": { 00:17:24.945 "cntlid": 0, 00:17:24.945 "vendor_id": "0x1b36", 00:17:24.945 "model_number": "QEMU NVMe Ctrl", 00:17:24.945 "serial_number": "12341", 00:17:24.945 "firmware_revision": "8.0.0", 00:17:24.945 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:24.945 "oacs": { 00:17:24.945 "security": 
0, 00:17:24.945 "format": 1, 00:17:24.945 "firmware": 0, 00:17:24.945 "ns_manage": 1 00:17:24.945 }, 00:17:24.945 "multi_ctrlr": false, 00:17:24.945 "ana_reporting": false 00:17:24.945 }, 00:17:24.945 "vs": { 00:17:24.945 "nvme_version": "1.4" 00:17:24.945 }, 00:17:24.945 "ns_data": { 00:17:24.945 "id": 1, 00:17:24.945 "can_share": false 00:17:24.945 } 00:17:24.945 } 00:17:24.945 ], 00:17:24.945 "mp_policy": "active_passive" 00:17:24.945 } 00:17:24.945 } 00:17:24.945 ]' 00:17:24.945 04:13:26 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:24.945 04:13:26 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:24.945 04:13:26 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:24.945 04:13:26 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:24.945 04:13:26 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:24.945 04:13:26 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:24.945 04:13:26 -- ftl/common.sh@63 -- # base_size=5120 00:17:24.945 04:13:26 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:24.945 04:13:26 -- ftl/common.sh@67 -- # clear_lvols 00:17:24.945 04:13:26 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:24.945 04:13:26 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:25.204 04:13:26 -- ftl/common.sh@28 -- # stores=8d5588d7-d2b7-43cf-96b4-dd7923846e5e 00:17:25.204 04:13:26 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:25.204 04:13:26 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d5588d7-d2b7-43cf-96b4-dd7923846e5e 00:17:25.204 04:13:26 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:25.463 04:13:27 -- ftl/common.sh@68 -- # lvs=2c12d8aa-252f-45f6-bd4e-80b207b89121 00:17:25.463 04:13:27 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2c12d8aa-252f-45f6-bd4e-80b207b89121 00:17:25.722 04:13:27 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:25.722 04:13:27 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:17:25.722 04:13:27 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:25.722 04:13:27 -- ftl/common.sh@35 -- # local name=nvc0 00:17:25.722 04:13:27 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:25.722 04:13:27 -- ftl/common.sh@37 -- # local base_bdev=c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:25.722 04:13:27 -- ftl/common.sh@38 -- # local cache_size= 00:17:25.722 04:13:27 -- ftl/common.sh@41 -- # get_bdev_size c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:25.722 04:13:27 -- common/autotest_common.sh@1367 -- # local bdev_name=c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:25.722 04:13:27 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:25.722 04:13:27 -- common/autotest_common.sh@1369 -- # local bs 00:17:25.722 04:13:27 -- common/autotest_common.sh@1370 -- # local nb 00:17:25.722 04:13:27 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:25.981 04:13:27 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:25.981 { 00:17:25.981 "name": "c39160ad-c894-46c6-b48b-09c7c0c8545b", 00:17:25.981 "aliases": [ 00:17:25.981 "lvs/nvme0n1p0" 00:17:25.981 ], 00:17:25.981 "product_name": "Logical Volume", 00:17:25.981 "block_size": 4096, 00:17:25.981 "num_blocks": 26476544, 00:17:25.981 
"uuid": "c39160ad-c894-46c6-b48b-09c7c0c8545b", 00:17:25.981 "assigned_rate_limits": { 00:17:25.981 "rw_ios_per_sec": 0, 00:17:25.981 "rw_mbytes_per_sec": 0, 00:17:25.981 "r_mbytes_per_sec": 0, 00:17:25.981 "w_mbytes_per_sec": 0 00:17:25.981 }, 00:17:25.981 "claimed": false, 00:17:25.981 "zoned": false, 00:17:25.981 "supported_io_types": { 00:17:25.981 "read": true, 00:17:25.981 "write": true, 00:17:25.981 "unmap": true, 00:17:25.981 "write_zeroes": true, 00:17:25.981 "flush": false, 00:17:25.981 "reset": true, 00:17:25.981 "compare": false, 00:17:25.981 "compare_and_write": false, 00:17:25.981 "abort": false, 00:17:25.981 "nvme_admin": false, 00:17:25.981 "nvme_io": false 00:17:25.981 }, 00:17:25.981 "driver_specific": { 00:17:25.981 "lvol": { 00:17:25.981 "lvol_store_uuid": "2c12d8aa-252f-45f6-bd4e-80b207b89121", 00:17:25.981 "base_bdev": "nvme0n1", 00:17:25.981 "thin_provision": true, 00:17:25.981 "snapshot": false, 00:17:25.981 "clone": false, 00:17:25.981 "esnap_clone": false 00:17:25.981 } 00:17:25.981 } 00:17:25.981 } 00:17:25.981 ]' 00:17:25.981 04:13:27 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:25.981 04:13:27 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:25.981 04:13:27 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:25.981 04:13:27 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:25.981 04:13:27 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:25.981 04:13:27 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:25.981 04:13:27 -- ftl/common.sh@41 -- # local base_size=5171 00:17:25.981 04:13:27 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:25.981 04:13:27 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:26.241 04:13:27 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:26.241 04:13:27 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:26.241 04:13:27 -- ftl/common.sh@48 -- # get_bdev_size c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:26.241 04:13:27 -- common/autotest_common.sh@1367 -- # local bdev_name=c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:26.241 04:13:27 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:26.241 04:13:27 -- common/autotest_common.sh@1369 -- # local bs 00:17:26.241 04:13:27 -- common/autotest_common.sh@1370 -- # local nb 00:17:26.241 04:13:27 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:26.500 04:13:28 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:26.500 { 00:17:26.500 "name": "c39160ad-c894-46c6-b48b-09c7c0c8545b", 00:17:26.500 "aliases": [ 00:17:26.500 "lvs/nvme0n1p0" 00:17:26.500 ], 00:17:26.500 "product_name": "Logical Volume", 00:17:26.500 "block_size": 4096, 00:17:26.500 "num_blocks": 26476544, 00:17:26.500 "uuid": "c39160ad-c894-46c6-b48b-09c7c0c8545b", 00:17:26.500 "assigned_rate_limits": { 00:17:26.500 "rw_ios_per_sec": 0, 00:17:26.500 "rw_mbytes_per_sec": 0, 00:17:26.500 "r_mbytes_per_sec": 0, 00:17:26.500 "w_mbytes_per_sec": 0 00:17:26.500 }, 00:17:26.500 "claimed": false, 00:17:26.500 "zoned": false, 00:17:26.500 "supported_io_types": { 00:17:26.500 "read": true, 00:17:26.500 "write": true, 00:17:26.500 "unmap": true, 00:17:26.500 "write_zeroes": true, 00:17:26.500 "flush": false, 00:17:26.500 "reset": true, 00:17:26.500 "compare": false, 00:17:26.500 "compare_and_write": false, 00:17:26.500 "abort": false, 00:17:26.500 "nvme_admin": false, 00:17:26.500 "nvme_io": false 00:17:26.500 }, 
00:17:26.500 "driver_specific": { 00:17:26.500 "lvol": { 00:17:26.500 "lvol_store_uuid": "2c12d8aa-252f-45f6-bd4e-80b207b89121", 00:17:26.500 "base_bdev": "nvme0n1", 00:17:26.500 "thin_provision": true, 00:17:26.500 "snapshot": false, 00:17:26.500 "clone": false, 00:17:26.500 "esnap_clone": false 00:17:26.500 } 00:17:26.500 } 00:17:26.500 } 00:17:26.500 ]' 00:17:26.500 04:13:28 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:26.500 04:13:28 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:26.500 04:13:28 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:26.500 04:13:28 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:26.500 04:13:28 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:26.500 04:13:28 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:26.500 04:13:28 -- ftl/common.sh@48 -- # cache_size=5171 00:17:26.500 04:13:28 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:26.759 04:13:28 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:17:26.759 04:13:28 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:26.759 04:13:28 -- common/autotest_common.sh@1367 -- # local bdev_name=c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:26.759 04:13:28 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:26.759 04:13:28 -- common/autotest_common.sh@1369 -- # local bs 00:17:26.759 04:13:28 -- common/autotest_common.sh@1370 -- # local nb 00:17:26.759 04:13:28 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c39160ad-c894-46c6-b48b-09c7c0c8545b 00:17:26.759 04:13:28 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:26.759 { 00:17:26.759 "name": "c39160ad-c894-46c6-b48b-09c7c0c8545b", 00:17:26.759 "aliases": [ 00:17:26.759 "lvs/nvme0n1p0" 00:17:26.759 ], 00:17:26.759 "product_name": "Logical Volume", 00:17:26.759 "block_size": 4096, 00:17:26.759 "num_blocks": 26476544, 00:17:26.759 "uuid": "c39160ad-c894-46c6-b48b-09c7c0c8545b", 00:17:26.759 "assigned_rate_limits": { 00:17:26.759 "rw_ios_per_sec": 0, 00:17:26.759 "rw_mbytes_per_sec": 0, 00:17:26.759 "r_mbytes_per_sec": 0, 00:17:26.759 "w_mbytes_per_sec": 0 00:17:26.759 }, 00:17:26.759 "claimed": false, 00:17:26.759 "zoned": false, 00:17:26.759 "supported_io_types": { 00:17:26.759 "read": true, 00:17:26.759 "write": true, 00:17:26.759 "unmap": true, 00:17:26.759 "write_zeroes": true, 00:17:26.759 "flush": false, 00:17:26.759 "reset": true, 00:17:26.759 "compare": false, 00:17:26.759 "compare_and_write": false, 00:17:26.759 "abort": false, 00:17:26.759 "nvme_admin": false, 00:17:26.759 "nvme_io": false 00:17:26.759 }, 00:17:26.759 "driver_specific": { 00:17:26.759 "lvol": { 00:17:26.759 "lvol_store_uuid": "2c12d8aa-252f-45f6-bd4e-80b207b89121", 00:17:26.759 "base_bdev": "nvme0n1", 00:17:26.759 "thin_provision": true, 00:17:26.759 "snapshot": false, 00:17:26.759 "clone": false, 00:17:26.759 "esnap_clone": false 00:17:26.759 } 00:17:26.759 } 00:17:26.759 } 00:17:26.759 ]' 00:17:26.759 04:13:28 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:26.759 04:13:28 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:26.759 04:13:28 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:27.020 04:13:28 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:27.020 04:13:28 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:27.020 04:13:28 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:27.020 
04:13:28 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:17:27.020 04:13:28 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c39160ad-c894-46c6-b48b-09c7c0c8545b --l2p_dram_limit 10' 00:17:27.020 04:13:28 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:17:27.020 04:13:28 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:17:27.020 04:13:28 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:27.020 04:13:28 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c39160ad-c894-46c6-b48b-09c7c0c8545b --l2p_dram_limit 10 -c nvc0n1p0 00:17:27.020 [2024-11-26 04:13:28.716256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.716299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:27.020 [2024-11-26 04:13:28.716312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.020 [2024-11-26 04:13:28.716318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.716362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.716370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.020 [2024-11-26 04:13:28.716382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:27.020 [2024-11-26 04:13:28.716387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.716407] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:27.020 [2024-11-26 04:13:28.716937] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:27.020 [2024-11-26 04:13:28.717049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.717078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.020 [2024-11-26 04:13:28.717111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:17:27.020 [2024-11-26 04:13:28.717134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.717364] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6caa290c-be3b-49f4-bbc5-8a1ab8a54b7e 00:17:27.020 [2024-11-26 04:13:28.719325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.719420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:27.020 [2024-11-26 04:13:28.719457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:27.020 [2024-11-26 04:13:28.719485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.727632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.727708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.020 [2024-11-26 04:13:28.727735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.994 ms 00:17:27.020 [2024-11-26 04:13:28.727765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.727984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.728038] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.020 [2024-11-26 04:13:28.728077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:27.020 [2024-11-26 04:13:28.728102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.728251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.728308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:27.020 [2024-11-26 04:13:28.728332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:27.020 [2024-11-26 04:13:28.728357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.728424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.020 [2024-11-26 04:13:28.730460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.730489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.020 [2024-11-26 04:13:28.730499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:17:27.020 [2024-11-26 04:13:28.730517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.730557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.730567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:27.020 [2024-11-26 04:13:28.730578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:27.020 [2024-11-26 04:13:28.730585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.730602] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:27.020 [2024-11-26 04:13:28.730710] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:27.020 [2024-11-26 04:13:28.730733] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:27.020 [2024-11-26 04:13:28.730744] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:27.020 [2024-11-26 04:13:28.730761] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:27.020 [2024-11-26 04:13:28.730769] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:27.020 [2024-11-26 04:13:28.730779] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:27.020 [2024-11-26 04:13:28.730785] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:27.020 [2024-11-26 04:13:28.730794] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:27.020 [2024-11-26 04:13:28.730801] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:27.020 [2024-11-26 04:13:28.730810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.730817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:27.020 [2024-11-26 04:13:28.730826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:27.020 [2024-11-26 04:13:28.730835] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.730899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.020 [2024-11-26 04:13:28.730912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:27.020 [2024-11-26 04:13:28.730921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:27.020 [2024-11-26 04:13:28.730927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.020 [2024-11-26 04:13:28.731003] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:27.020 [2024-11-26 04:13:28.731012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:27.020 [2024-11-26 04:13:28.731021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.020 [2024-11-26 04:13:28.731029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:27.020 [2024-11-26 04:13:28.731046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:27.020 [2024-11-26 04:13:28.731061] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:27.020 [2024-11-26 04:13:28.731069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.020 [2024-11-26 04:13:28.731083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:27.020 [2024-11-26 04:13:28.731089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:27.020 [2024-11-26 04:13:28.731099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.020 [2024-11-26 04:13:28.731105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:27.020 [2024-11-26 04:13:28.731114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:27.020 [2024-11-26 04:13:28.731122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:27.020 [2024-11-26 04:13:28.731139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:27.020 [2024-11-26 04:13:28.731148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:27.020 [2024-11-26 04:13:28.731164] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:27.020 [2024-11-26 04:13:28.731172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:27.020 [2024-11-26 04:13:28.731181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:27.020 [2024-11-26 04:13:28.731189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.020 [2024-11-26 04:13:28.731205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:27.020 [2024-11-26 04:13:28.731215] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731222] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.020 [2024-11-26 04:13:28.731233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:27.020 [2024-11-26 04:13:28.731240] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.020 [2024-11-26 04:13:28.731255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:27.020 [2024-11-26 04:13:28.731264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:27.020 [2024-11-26 04:13:28.731271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:27.020 [2024-11-26 04:13:28.731280] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:27.021 [2024-11-26 04:13:28.731287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:27.021 [2024-11-26 04:13:28.731296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.021 [2024-11-26 04:13:28.731303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:27.021 [2024-11-26 04:13:28.731312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:27.021 [2024-11-26 04:13:28.731319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.021 [2024-11-26 04:13:28.731328] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:27.021 [2024-11-26 04:13:28.731336] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:27.021 [2024-11-26 04:13:28.731345] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.021 [2024-11-26 04:13:28.731353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.021 [2024-11-26 04:13:28.731364] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:27.021 [2024-11-26 04:13:28.731372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:27.021 [2024-11-26 04:13:28.731380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:27.021 [2024-11-26 04:13:28.731388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:27.021 [2024-11-26 04:13:28.731397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:27.021 [2024-11-26 04:13:28.731405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:27.021 [2024-11-26 04:13:28.731415] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:27.021 [2024-11-26 04:13:28.731425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.021 [2024-11-26 04:13:28.731440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:27.021 [2024-11-26 04:13:28.731448] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:27.021 [2024-11-26 04:13:28.731458] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:27.021 [2024-11-26 04:13:28.731466] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 
00:17:27.021 [2024-11-26 04:13:28.731476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:27.021 [2024-11-26 04:13:28.731484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:27.021 [2024-11-26 04:13:28.731493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:27.021 [2024-11-26 04:13:28.731516] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:27.021 [2024-11-26 04:13:28.731528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:27.021 [2024-11-26 04:13:28.731534] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:27.021 [2024-11-26 04:13:28.731554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:27.021 [2024-11-26 04:13:28.731561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:27.021 [2024-11-26 04:13:28.731570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:27.021 [2024-11-26 04:13:28.731577] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:27.021 [2024-11-26 04:13:28.731587] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.021 [2024-11-26 04:13:28.731594] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:27.021 [2024-11-26 04:13:28.731603] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:27.021 [2024-11-26 04:13:28.731610] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:27.021 [2024-11-26 04:13:28.731618] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:27.021 [2024-11-26 04:13:28.731625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.731634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:27.021 [2024-11-26 04:13:28.731641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:17:27.021 [2024-11-26 04:13:28.731651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.021 [2024-11-26 04:13:28.737554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.737588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.021 [2024-11-26 04:13:28.737597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.863 ms 00:17:27.021 [2024-11-26 04:13:28.737606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.021 [2024-11-26 04:13:28.737721] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.737736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:27.021 [2024-11-26 04:13:28.737745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:27.021 [2024-11-26 04:13:28.737753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.021 [2024-11-26 04:13:28.746359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.746395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.021 [2024-11-26 04:13:28.746404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.571 ms 00:17:27.021 [2024-11-26 04:13:28.746413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.021 [2024-11-26 04:13:28.746439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.746449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.021 [2024-11-26 04:13:28.746457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:27.021 [2024-11-26 04:13:28.746467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.021 [2024-11-26 04:13:28.746799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.746830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.021 [2024-11-26 04:13:28.746844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:17:27.021 [2024-11-26 04:13:28.746853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.021 [2024-11-26 04:13:28.746956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.746976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.021 [2024-11-26 04:13:28.746984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:27.021 [2024-11-26 04:13:28.746993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.021 [2024-11-26 04:13:28.752195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.752228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.021 [2024-11-26 04:13:28.752237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:17:27.021 [2024-11-26 04:13:28.752246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.021 [2024-11-26 04:13:28.760478] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:27.021 [2024-11-26 04:13:28.763090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.021 [2024-11-26 04:13:28.763119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:27.021 [2024-11-26 04:13:28.763135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.791 ms 00:17:27.021 [2024-11-26 04:13:28.763145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.279 [2024-11-26 04:13:28.812525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.279 [2024-11-26 04:13:28.812574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:27.279 [2024-11-26 04:13:28.812588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 49.347 ms 00:17:27.279 [2024-11-26 04:13:28.812597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.279 [2024-11-26 04:13:28.812636] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:27.279 [2024-11-26 04:13:28.812663] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:29.180 [2024-11-26 04:13:30.888785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.888840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:29.180 [2024-11-26 04:13:30.888856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2076.137 ms 00:17:29.180 [2024-11-26 04:13:30.888865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.889056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.889078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.180 [2024-11-26 04:13:30.889088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:29.180 [2024-11-26 04:13:30.889096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.891937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.891972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:29.180 [2024-11-26 04:13:30.891987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:17:29.180 [2024-11-26 04:13:30.891999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.894314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.894345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:29.180 [2024-11-26 04:13:30.894357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.278 ms 00:17:29.180 [2024-11-26 04:13:30.894363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.894545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.894564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.180 [2024-11-26 04:13:30.894575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:29.180 [2024-11-26 04:13:30.894582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.915283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.915327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:29.180 [2024-11-26 04:13:30.915341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.677 ms 00:17:29.180 [2024-11-26 04:13:30.915349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.919138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.919173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:29.180 [2024-11-26 04:13:30.919188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.752 ms 00:17:29.180 
[2024-11-26 04:13:30.919197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.920441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.920474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:29.180 [2024-11-26 04:13:30.920485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:17:29.180 [2024-11-26 04:13:30.920492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.923432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.923470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.180 [2024-11-26 04:13:30.923483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:17:29.180 [2024-11-26 04:13:30.923492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.923598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.923613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.180 [2024-11-26 04:13:30.923625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:29.180 [2024-11-26 04:13:30.923634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.923700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.180 [2024-11-26 04:13:30.923709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.180 [2024-11-26 04:13:30.923722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:29.180 [2024-11-26 04:13:30.923731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.180 [2024-11-26 04:13:30.924568] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2207.944 ms, result 0 00:17:29.180 { 00:17:29.180 "name": "ftl0", 00:17:29.180 "uuid": "6caa290c-be3b-49f4-bbc5-8a1ab8a54b7e" 00:17:29.180 } 00:17:29.180 04:13:30 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:17:29.180 04:13:30 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:29.438 04:13:31 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:17:29.438 04:13:31 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:17:29.438 04:13:31 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:17:29.697 /dev/nbd0 00:17:29.697 04:13:31 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:17:29.697 04:13:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:29.697 04:13:31 -- common/autotest_common.sh@867 -- # local i 00:17:29.697 04:13:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:29.697 04:13:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:29.697 04:13:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:29.697 04:13:31 -- common/autotest_common.sh@871 -- # break 00:17:29.697 04:13:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:29.697 04:13:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:29.697 04:13:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:17:29.697 1+0 records in 00:17:29.697 
1+0 records out 00:17:29.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150721 s, 27.2 MB/s 00:17:29.697 04:13:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:17:29.697 04:13:31 -- common/autotest_common.sh@884 -- # size=4096 00:17:29.697 04:13:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:17:29.697 04:13:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:29.697 04:13:31 -- common/autotest_common.sh@887 -- # return 0 00:17:29.697 04:13:31 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:17:29.697 [2024-11-26 04:13:31.408691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:29.697 [2024-11-26 04:13:31.408968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85127 ] 00:17:29.956 [2024-11-26 04:13:31.556563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.956 [2024-11-26 04:13:31.586475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.891  [2024-11-26T04:13:34.034Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-26T04:13:34.964Z] Copying: 394/1024 [MB] (197 MBps) [2024-11-26T04:13:35.894Z] Copying: 599/1024 [MB] (205 MBps) [2024-11-26T04:13:36.459Z] Copying: 860/1024 [MB] (260 MBps) [2024-11-26T04:13:36.459Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:17:34.691 00:17:34.691 04:13:36 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:36.592 04:13:38 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:17:36.592 [2024-11-26 04:13:38.097221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:36.592 [2024-11-26 04:13:38.097677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85200 ] 00:17:36.592 [2024-11-26 04:13:38.244030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.592 [2024-11-26 04:13:38.272327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.965  [2024-11-26T04:13:40.665Z] Copying: 32/1024 [MB] (32 MBps) [2024-11-26T04:13:41.600Z] Copying: 67/1024 [MB] (34 MBps) [2024-11-26T04:13:42.536Z] Copying: 98/1024 [MB] (31 MBps) [2024-11-26T04:13:43.504Z] Copying: 131/1024 [MB] (33 MBps) [2024-11-26T04:13:44.439Z] Copying: 163/1024 [MB] (31 MBps) [2024-11-26T04:13:45.373Z] Copying: 194/1024 [MB] (30 MBps) [2024-11-26T04:13:46.749Z] Copying: 230/1024 [MB] (36 MBps) [2024-11-26T04:13:47.682Z] Copying: 264/1024 [MB] (34 MBps) [2024-11-26T04:13:48.615Z] Copying: 295/1024 [MB] (31 MBps) [2024-11-26T04:13:49.550Z] Copying: 326/1024 [MB] (30 MBps) [2024-11-26T04:13:50.483Z] Copying: 362/1024 [MB] (35 MBps) [2024-11-26T04:13:51.417Z] Copying: 393/1024 [MB] (30 MBps) [2024-11-26T04:13:52.351Z] Copying: 426/1024 [MB] (33 MBps) [2024-11-26T04:13:53.723Z] Copying: 458/1024 [MB] (31 MBps) [2024-11-26T04:13:54.656Z] Copying: 490/1024 [MB] (32 MBps) [2024-11-26T04:13:55.619Z] Copying: 521/1024 [MB] (30 MBps) [2024-11-26T04:13:56.553Z] Copying: 552/1024 [MB] (30 MBps) [2024-11-26T04:13:57.486Z] Copying: 586/1024 [MB] (34 MBps) [2024-11-26T04:13:58.420Z] Copying: 616/1024 [MB] (30 MBps) [2024-11-26T04:13:59.352Z] Copying: 647/1024 [MB] (31 MBps) [2024-11-26T04:14:00.725Z] Copying: 680/1024 [MB] (33 MBps) [2024-11-26T04:14:01.364Z] Copying: 711/1024 [MB] (30 MBps) [2024-11-26T04:14:02.748Z] Copying: 742/1024 [MB] (31 MBps) [2024-11-26T04:14:03.681Z] Copying: 773/1024 [MB] (30 MBps) [2024-11-26T04:14:04.616Z] Copying: 810/1024 [MB] (36 MBps) [2024-11-26T04:14:05.551Z] Copying: 843/1024 [MB] (33 MBps) [2024-11-26T04:14:06.487Z] Copying: 881/1024 [MB] (38 MBps) [2024-11-26T04:14:07.421Z] Copying: 916/1024 [MB] (34 MBps) [2024-11-26T04:14:08.355Z] Copying: 948/1024 [MB] (32 MBps) [2024-11-26T04:14:09.729Z] Copying: 981/1024 [MB] (32 MBps) [2024-11-26T04:14:09.729Z] Copying: 1015/1024 [MB] (34 MBps) [2024-11-26T04:14:09.988Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:18:08.220 00:18:08.220 04:14:09 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:18:08.220 04:14:09 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:18:08.220 04:14:09 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:08.481 [2024-11-26 04:14:10.107587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.107635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:08.481 [2024-11-26 04:14:10.107648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.481 [2024-11-26 04:14:10.107658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.107682] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:08.481 [2024-11-26 04:14:10.108125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.108141] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:08.481 [2024-11-26 04:14:10.108154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:18:08.481 [2024-11-26 04:14:10.108161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.110081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.110114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:08.481 [2024-11-26 04:14:10.110126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.894 ms 00:18:08.481 [2024-11-26 04:14:10.110133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.124421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.124453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:08.481 [2024-11-26 04:14:10.124465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.264 ms 00:18:08.481 [2024-11-26 04:14:10.124473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.130662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.130800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:08.481 [2024-11-26 04:14:10.130821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.138 ms 00:18:08.481 [2024-11-26 04:14:10.130829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.132050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.132081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:08.481 [2024-11-26 04:14:10.132092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:18:08.481 [2024-11-26 04:14:10.132099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.136305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.136337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:08.481 [2024-11-26 04:14:10.136353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.171 ms 00:18:08.481 [2024-11-26 04:14:10.136361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.136483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.136492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:08.481 [2024-11-26 04:14:10.136525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:08.481 [2024-11-26 04:14:10.136533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.138556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.138598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:08.481 [2024-11-26 04:14:10.138612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:18:08.481 [2024-11-26 04:14:10.138620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.139759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 
[2024-11-26 04:14:10.139790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:08.481 [2024-11-26 04:14:10.139801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:18:08.481 [2024-11-26 04:14:10.139807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.140660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.140689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:08.481 [2024-11-26 04:14:10.140700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:18:08.481 [2024-11-26 04:14:10.140706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.141647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.481 [2024-11-26 04:14:10.141779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:08.481 [2024-11-26 04:14:10.141797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:18:08.481 [2024-11-26 04:14:10.141803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.481 [2024-11-26 04:14:10.141834] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:08.481 [2024-11-26 04:14:10.141849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:08.481 [2024-11-26 04:14:10.141863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:08.481 [2024-11-26 04:14:10.141870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:08.481 [2024-11-26 04:14:10.141880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:08.481 [2024-11-26 04:14:10.141887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141978] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.141994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 
04:14:10.142194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:18:08.482 [2024-11-26 04:14:10.142403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:08.482 [2024-11-26 04:14:10.142450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:08.483 [2024-11-26 04:14:10.142714] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:08.483 [2024-11-26 04:14:10.142722] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6caa290c-be3b-49f4-bbc5-8a1ab8a54b7e 00:18:08.483 [2024-11-26 04:14:10.142732] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:08.483 [2024-11-26 04:14:10.142742] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:08.483 [2024-11-26 04:14:10.142749] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:08.483 [2024-11-26 04:14:10.142762] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:08.483 [2024-11-26 04:14:10.142768] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:08.483 [2024-11-26 04:14:10.142779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:08.483 [2024-11-26 04:14:10.142786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:08.483 [2024-11-26 04:14:10.142793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:08.483 [2024-11-26 04:14:10.142799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:08.483 [2024-11-26 04:14:10.142807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.483 [2024-11-26 04:14:10.142814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:08.483 [2024-11-26 04:14:10.142823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:18:08.483 [2024-11-26 04:14:10.142830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.144209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.483 [2024-11-26 04:14:10.144228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:08.483 [2024-11-26 04:14:10.144238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:18:08.483 [2024-11-26 04:14:10.144246] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.144299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.483 [2024-11-26 04:14:10.144307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:08.483 [2024-11-26 04:14:10.144316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:08.483 [2024-11-26 04:14:10.144323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.149752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.483 [2024-11-26 04:14:10.149848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.483 [2024-11-26 04:14:10.149934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.483 [2024-11-26 04:14:10.149956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.150022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.483 [2024-11-26 04:14:10.150048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.483 [2024-11-26 04:14:10.150106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.483 [2024-11-26 04:14:10.150128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.150196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.483 [2024-11-26 04:14:10.150220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.483 [2024-11-26 04:14:10.150241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.483 [2024-11-26 04:14:10.150314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.150345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.483 [2024-11-26 04:14:10.150366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.483 [2024-11-26 04:14:10.150386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.483 [2024-11-26 04:14:10.150437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.159453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.483 [2024-11-26 04:14:10.159620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.483 [2024-11-26 04:14:10.159677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.483 [2024-11-26 04:14:10.159699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.163262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.483 [2024-11-26 04:14:10.163379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.483 [2024-11-26 04:14:10.163432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.483 [2024-11-26 04:14:10.163454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.483 [2024-11-26 04:14:10.163570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.483 [2024-11-26 04:14:10.163598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.483 [2024-11-26 04:14:10.163655] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.484 [2024-11-26 04:14:10.163676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.484 [2024-11-26 04:14:10.163719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.484 [2024-11-26 04:14:10.163768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.484 [2024-11-26 04:14:10.163792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.484 [2024-11-26 04:14:10.163811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.484 [2024-11-26 04:14:10.163917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.484 [2024-11-26 04:14:10.163991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.484 [2024-11-26 04:14:10.164044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.484 [2024-11-26 04:14:10.164066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.484 [2024-11-26 04:14:10.164127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.484 [2024-11-26 04:14:10.164152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:08.484 [2024-11-26 04:14:10.164173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.484 [2024-11-26 04:14:10.164219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.484 [2024-11-26 04:14:10.164274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.484 [2024-11-26 04:14:10.164330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.484 [2024-11-26 04:14:10.164354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.484 [2024-11-26 04:14:10.164396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.484 [2024-11-26 04:14:10.164458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.484 [2024-11-26 04:14:10.164513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.484 [2024-11-26 04:14:10.164563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.484 [2024-11-26 04:14:10.164584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.484 [2024-11-26 04:14:10.164730] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.105 ms, result 0 00:18:08.484 true 00:18:08.484 04:14:10 -- ftl/dirty_shutdown.sh@83 -- # kill -9 85002 00:18:08.484 04:14:10 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid85002 00:18:08.484 04:14:10 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:18:08.484 [2024-11-26 04:14:10.236953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
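Steps 83 and 84 above are the dirty-shutdown trigger of this test: the spdk_tgt process that owns the FTL stack is killed with SIGKILL, so it gets no chance at a graceful teardown, and the per-PID trace buffer it left in /dev/shm is deleted so a later target can start cleanly. Condensed from the trace, with the literal PID 85002 replaced by an illustrative harness variable:

    # SIGKILL runs no handlers, so nothing not already persisted survives;
    # the next load of this device has to cope with whatever was in flight.
    kill -9 "$svcpid"
    # drop the trace buffer the killed target left behind in /dev/shm
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"

The fallout of this pair of commands is visible just below: bash's deferred "85002 Killed" job notice, then "Performing recovery on blobstore" and "SHM: clean 0, shm_clean 0" as the next spdk_dd reloads the stack from ftl.json instead of finding a cleanly shut-down state.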
00:18:08.484 [2024-11-26 04:14:10.237203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85548 ] 00:18:08.742 [2024-11-26 04:14:10.385590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.742 [2024-11-26 04:14:10.415881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.117  [2024-11-26T04:14:12.820Z] Copying: 207/1024 [MB] (207 MBps) [2024-11-26T04:14:13.757Z] Copying: 466/1024 [MB] (259 MBps) [2024-11-26T04:14:14.692Z] Copying: 725/1024 [MB] (258 MBps) [2024-11-26T04:14:14.692Z] Copying: 983/1024 [MB] (258 MBps) [2024-11-26T04:14:14.950Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:18:13.182 00:18:13.182 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 85002 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:18:13.182 04:14:14 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:13.182 [2024-11-26 04:14:14.837200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:13.182 [2024-11-26 04:14:14.837308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85595 ] 00:18:13.441 [2024-11-26 04:14:14.983230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.441 [2024-11-26 04:14:15.011910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.441 [2024-11-26 04:14:15.092638] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:13.441 [2024-11-26 04:14:15.092702] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:13.441 [2024-11-26 04:14:15.151409] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:13.441 [2024-11-26 04:14:15.151732] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:13.441 [2024-11-26 04:14:15.151864] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:13.700 [2024-11-26 04:14:15.319720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.700 [2024-11-26 04:14:15.319765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:13.700 [2024-11-26 04:14:15.319777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:13.700 [2024-11-26 04:14:15.319783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.700 [2024-11-26 04:14:15.319836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.700 [2024-11-26 04:14:15.319844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.700 [2024-11-26 04:14:15.319851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:13.700 [2024-11-26 04:14:15.319861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.700 [2024-11-26 04:14:15.319877] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:13.700 [2024-11-26 04:14:15.320094] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:13.700 [2024-11-26 04:14:15.320105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.700 [2024-11-26 04:14:15.320110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.700 [2024-11-26 04:14:15.320120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:18:13.700 [2024-11-26 04:14:15.320125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.700 [2024-11-26 04:14:15.321122] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:13.700 [2024-11-26 04:14:15.323264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.700 [2024-11-26 04:14:15.323297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:13.700 [2024-11-26 04:14:15.323306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:18:13.700 [2024-11-26 04:14:15.323312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.700 [2024-11-26 04:14:15.323362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.700 [2024-11-26 04:14:15.323369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:13.700 [2024-11-26 04:14:15.323376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:13.700 [2024-11-26 04:14:15.323382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.700 [2024-11-26 04:14:15.327918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.700 [2024-11-26 04:14:15.328037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.700 [2024-11-26 04:14:15.328050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.501 ms 00:18:13.700 [2024-11-26 04:14:15.328060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.700 [2024-11-26 04:14:15.328116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.700 [2024-11-26 04:14:15.328123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.700 [2024-11-26 04:14:15.328132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:13.701 [2024-11-26 04:14:15.328140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.701 [2024-11-26 04:14:15.328180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.701 [2024-11-26 04:14:15.328187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:13.701 [2024-11-26 04:14:15.328197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:13.701 [2024-11-26 04:14:15.328205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.701 [2024-11-26 04:14:15.328222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.701 [2024-11-26 04:14:15.329394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.701 [2024-11-26 04:14:15.329419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.701 [2024-11-26 04:14:15.329430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:18:13.701 [2024-11-26 04:14:15.329435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.701 [2024-11-26 04:14:15.329465] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.701 [2024-11-26 04:14:15.329472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:13.701 [2024-11-26 04:14:15.329478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:13.701 [2024-11-26 04:14:15.329483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.701 [2024-11-26 04:14:15.329498] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:13.701 [2024-11-26 04:14:15.329537] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:13.701 [2024-11-26 04:14:15.329563] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:13.701 [2024-11-26 04:14:15.329579] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:13.701 [2024-11-26 04:14:15.329644] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:13.701 [2024-11-26 04:14:15.329651] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:13.701 [2024-11-26 04:14:15.329659] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:13.701 [2024-11-26 04:14:15.329667] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:13.701 [2024-11-26 04:14:15.329673] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:13.701 [2024-11-26 04:14:15.329679] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:13.701 [2024-11-26 04:14:15.329688] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:13.701 [2024-11-26 04:14:15.329693] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:13.701 [2024-11-26 04:14:15.329703] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:13.701 [2024-11-26 04:14:15.329709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.701 [2024-11-26 04:14:15.329714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:13.701 [2024-11-26 04:14:15.329723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:18:13.701 [2024-11-26 04:14:15.329729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.701 [2024-11-26 04:14:15.329779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.701 [2024-11-26 04:14:15.329787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:13.701 [2024-11-26 04:14:15.329792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:13.701 [2024-11-26 04:14:15.329797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.701 [2024-11-26 04:14:15.329854] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:13.701 [2024-11-26 04:14:15.329861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:13.701 [2024-11-26 04:14:15.329870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.701 [2024-11-26 04:14:15.329875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:13.701 [2024-11-26 04:14:15.329881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:13.701 [2024-11-26 04:14:15.329886] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:13.701 [2024-11-26 04:14:15.329893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:13.701 [2024-11-26 04:14:15.329899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:13.701 [2024-11-26 04:14:15.329904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:13.701 [2024-11-26 04:14:15.329913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.701 [2024-11-26 04:14:15.329919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:13.701 [2024-11-26 04:14:15.329924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:13.701 [2024-11-26 04:14:15.329929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:13.701 [2024-11-26 04:14:15.329934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:13.701 [2024-11-26 04:14:15.329939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:13.701 [2024-11-26 04:14:15.329946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.701 [2024-11-26 04:14:15.329951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:13.701 [2024-11-26 04:14:15.329955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:13.701 [2024-11-26 04:14:15.329960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.701 [2024-11-26 04:14:15.329966] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:13.701 [2024-11-26 04:14:15.329970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:13.701 [2024-11-26 04:14:15.329976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:13.701 [2024-11-26 04:14:15.329984] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:13.701 [2024-11-26 04:14:15.329989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:13.701 [2024-11-26 04:14:15.329994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:13.701 [2024-11-26 04:14:15.329999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:13.701 [2024-11-26 04:14:15.330004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:13.701 [2024-11-26 04:14:15.330009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:13.701 [2024-11-26 04:14:15.330014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:13.701 [2024-11-26 04:14:15.330018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:13.701 [2024-11-26 04:14:15.330023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:13.701 [2024-11-26 04:14:15.330028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:13.701 [2024-11-26 04:14:15.330033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:13.701 [2024-11-26 04:14:15.330038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:13.701 [2024-11-26 04:14:15.330043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:13.701 [2024-11-26 04:14:15.330049] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.12 MiB 00:18:13.701 [2024-11-26 04:14:15.330054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.701 [2024-11-26 04:14:15.330060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:13.701 [2024-11-26 04:14:15.330069] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:13.701 [2024-11-26 04:14:15.330075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:13.701 [2024-11-26 04:14:15.330080] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:13.701 [2024-11-26 04:14:15.330089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:13.701 [2024-11-26 04:14:15.330095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:13.701 [2024-11-26 04:14:15.330102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:13.701 [2024-11-26 04:14:15.330109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:13.701 [2024-11-26 04:14:15.330114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:13.701 [2024-11-26 04:14:15.330120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:13.701 [2024-11-26 04:14:15.330129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:13.701 [2024-11-26 04:14:15.330135] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:13.701 [2024-11-26 04:14:15.330141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:13.701 [2024-11-26 04:14:15.330148] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:13.701 [2024-11-26 04:14:15.330156] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.701 [2024-11-26 04:14:15.330166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:13.701 [2024-11-26 04:14:15.330172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:13.701 [2024-11-26 04:14:15.330180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:13.701 [2024-11-26 04:14:15.330186] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:13.701 [2024-11-26 04:14:15.330192] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:13.701 [2024-11-26 04:14:15.330198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:13.701 [2024-11-26 04:14:15.330204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:13.701 [2024-11-26 04:14:15.330210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:13.701 [2024-11-26 04:14:15.330216] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:13.701 [2024-11-26 04:14:15.330222] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:13.701 [2024-11-26 04:14:15.330228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:13.701 [2024-11-26 04:14:15.330234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:13.701 [2024-11-26 04:14:15.330241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:13.701 [2024-11-26 04:14:15.330247] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:13.702 [2024-11-26 04:14:15.330258] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:13.702 [2024-11-26 04:14:15.330265] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:13.702 [2024-11-26 04:14:15.330271] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:13.702 [2024-11-26 04:14:15.330277] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:13.702 [2024-11-26 04:14:15.330286] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:13.702 [2024-11-26 04:14:15.330292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.330299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:13.702 [2024-11-26 04:14:15.330305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:18:13.702 [2024-11-26 04:14:15.330311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.335798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.335906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.702 [2024-11-26 04:14:15.335956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.459 ms 00:18:13.702 [2024-11-26 04:14:15.335975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.336056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.336077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:13.702 [2024-11-26 04:14:15.336177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:13.702 [2024-11-26 04:14:15.336195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.356112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.356237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.702 [2024-11-26 04:14:15.356287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.866 ms 00:18:13.702 [2024-11-26 04:14:15.356311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.356377] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.356404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.702 [2024-11-26 04:14:15.356489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:13.702 [2024-11-26 04:14:15.356527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.356881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.356961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.702 [2024-11-26 04:14:15.357015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:18:13.702 [2024-11-26 04:14:15.357032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.357146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.357164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.702 [2024-11-26 04:14:15.357180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:13.702 [2024-11-26 04:14:15.357197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.362076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.362160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.702 [2024-11-26 04:14:15.362200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.856 ms 00:18:13.702 [2024-11-26 04:14:15.362217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.365952] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:13.702 [2024-11-26 04:14:15.366267] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:13.702 [2024-11-26 04:14:15.366317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.366405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:13.702 [2024-11-26 04:14:15.366425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.966 ms 00:18:13.702 [2024-11-26 04:14:15.366445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.377865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.377969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:13.702 [2024-11-26 04:14:15.378047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.347 ms 00:18:13.702 [2024-11-26 04:14:15.378065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.379812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.379896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:13.702 [2024-11-26 04:14:15.379936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:18:13.702 [2024-11-26 04:14:15.379971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.381277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:13.702 [2024-11-26 04:14:15.381358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:13.702 [2024-11-26 04:14:15.381369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:18:13.702 [2024-11-26 04:14:15.381375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.381556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.381566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:13.702 [2024-11-26 04:14:15.381573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:18:13.702 [2024-11-26 04:14:15.381578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.397167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.397198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:13.702 [2024-11-26 04:14:15.397212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.575 ms 00:18:13.702 [2024-11-26 04:14:15.397218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.402937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:13.702 [2024-11-26 04:14:15.405327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.405349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.702 [2024-11-26 04:14:15.405358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.067 ms 00:18:13.702 [2024-11-26 04:14:15.405364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.405431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.405438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:13.702 [2024-11-26 04:14:15.405455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:13.702 [2024-11-26 04:14:15.405461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.405518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.405526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.702 [2024-11-26 04:14:15.405532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:13.702 [2024-11-26 04:14:15.405538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.406638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.406702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:13.702 [2024-11-26 04:14:15.406743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:18:13.702 [2024-11-26 04:14:15.406769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.406805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.406885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.702 [2024-11-26 04:14:15.406904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.702 
[2024-11-26 04:14:15.406919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.406958] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:13.702 [2024-11-26 04:14:15.406998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.407022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:13.702 [2024-11-26 04:14:15.407037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:13.702 [2024-11-26 04:14:15.407056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.410101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.410196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.702 [2024-11-26 04:14:15.410289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:18:13.702 [2024-11-26 04:14:15.410346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.410418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.702 [2024-11-26 04:14:15.410438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.702 [2024-11-26 04:14:15.410455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:13.702 [2024-11-26 04:14:15.410469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.702 [2024-11-26 04:14:15.411258] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 91.216 ms, result 0 00:18:15.078  [2024-11-26T04:14:17.780Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-26T04:14:18.714Z] Copying: 93/1024 [MB] (46 MBps) [2024-11-26T04:14:19.649Z] Copying: 139/1024 [MB] (46 MBps) [2024-11-26T04:14:20.623Z] Copying: 185/1024 [MB] (45 MBps) [2024-11-26T04:14:21.557Z] Copying: 234/1024 [MB] (49 MBps) [2024-11-26T04:14:22.492Z] Copying: 280/1024 [MB] (45 MBps) [2024-11-26T04:14:23.867Z] Copying: 325/1024 [MB] (45 MBps) [2024-11-26T04:14:24.432Z] Copying: 373/1024 [MB] (47 MBps) [2024-11-26T04:14:25.809Z] Copying: 423/1024 [MB] (50 MBps) [2024-11-26T04:14:26.742Z] Copying: 469/1024 [MB] (45 MBps) [2024-11-26T04:14:27.676Z] Copying: 516/1024 [MB] (46 MBps) [2024-11-26T04:14:28.610Z] Copying: 562/1024 [MB] (46 MBps) [2024-11-26T04:14:29.543Z] Copying: 612/1024 [MB] (50 MBps) [2024-11-26T04:14:30.478Z] Copying: 664/1024 [MB] (51 MBps) [2024-11-26T04:14:31.852Z] Copying: 710/1024 [MB] (45 MBps) [2024-11-26T04:14:32.787Z] Copying: 756/1024 [MB] (46 MBps) [2024-11-26T04:14:33.722Z] Copying: 802/1024 [MB] (46 MBps) [2024-11-26T04:14:34.656Z] Copying: 848/1024 [MB] (45 MBps) [2024-11-26T04:14:35.590Z] Copying: 894/1024 [MB] (45 MBps) [2024-11-26T04:14:36.523Z] Copying: 946/1024 [MB] (51 MBps) [2024-11-26T04:14:37.458Z] Copying: 992/1024 [MB] (46 MBps) [2024-11-26T04:14:38.395Z] Copying: 1023/1024 [MB] (31 MBps) [2024-11-26T04:14:38.395Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-11-26 04:14:38.219878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.220073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:36.627 [2024-11-26 04:14:38.220105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:36.627 [2024-11-26 04:14:38.220113] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.222203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:36.627 [2024-11-26 04:14:38.223569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.223607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:36.627 [2024-11-26 04:14:38.223616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:18:36.627 [2024-11-26 04:14:38.223623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.235631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.235663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:36.627 [2024-11-26 04:14:38.235673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.051 ms 00:18:36.627 [2024-11-26 04:14:38.235681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.253546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.253577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:36.627 [2024-11-26 04:14:38.253587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.849 ms 00:18:36.627 [2024-11-26 04:14:38.253598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.259804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.259834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:36.627 [2024-11-26 04:14:38.259844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms 00:18:36.627 [2024-11-26 04:14:38.259853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.261007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.261038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:36.627 [2024-11-26 04:14:38.261047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:18:36.627 [2024-11-26 04:14:38.261054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.264322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.264355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:36.627 [2024-11-26 04:14:38.264370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:18:36.627 [2024-11-26 04:14:38.264378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.314645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.314796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:36.627 [2024-11-26 04:14:38.314814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.234 ms 00:18:36.627 [2024-11-26 04:14:38.314821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.316586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.316617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: persist band info metadata 00:18:36.627 [2024-11-26 04:14:38.316626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:18:36.627 [2024-11-26 04:14:38.316633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.317731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.317769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:36.627 [2024-11-26 04:14:38.317782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:18:36.627 [2024-11-26 04:14:38.317793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.318659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.318689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:36.627 [2024-11-26 04:14:38.318698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:18:36.627 [2024-11-26 04:14:38.318705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.319518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.627 [2024-11-26 04:14:38.319546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:36.627 [2024-11-26 04:14:38.319554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:18:36.627 [2024-11-26 04:14:38.319560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.627 [2024-11-26 04:14:38.319586] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:36.627 [2024-11-26 04:14:38.319606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:18:36.627 [2024-11-26 04:14:38.319616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 
261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:36.627 [2024-11-26 04:14:38.319834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.319993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320065] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 
04:14:38.320249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:36.628 [2024-11-26 04:14:38.320351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:36.628 [2024-11-26 04:14:38.320359] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6caa290c-be3b-49f4-bbc5-8a1ab8a54b7e 00:18:36.628 [2024-11-26 04:14:38.320369] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:18:36.628 [2024-11-26 04:14:38.320375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128704 00:18:36.628 [2024-11-26 04:14:38.320382] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:18:36.628 [2024-11-26 04:14:38.320390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:18:36.628 [2024-11-26 04:14:38.320397] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:36.628 [2024-11-26 04:14:38.320404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:36.628 [2024-11-26 04:14:38.320411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:36.628 [2024-11-26 04:14:38.320417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:36.628 [2024-11-26 04:14:38.320423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:36.628 [2024-11-26 04:14:38.320430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.628 [2024-11-26 04:14:38.320437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:36.628 [2024-11-26 04:14:38.320444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:18:36.628 [2024-11-26 04:14:38.320451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:36.628 [2024-11-26 04:14:38.322228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.628 [2024-11-26 04:14:38.322331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:36.628 [2024-11-26 04:14:38.322383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:18:36.628 [2024-11-26 04:14:38.322405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.628 [2024-11-26 04:14:38.322536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.628 [2024-11-26 04:14:38.322603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:36.628 [2024-11-26 04:14:38.322658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:36.628 [2024-11-26 04:14:38.322679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.628 [2024-11-26 04:14:38.327706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.628 [2024-11-26 04:14:38.327828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.628 [2024-11-26 04:14:38.327881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.327902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.327964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.328226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.629 [2024-11-26 04:14:38.328547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.328747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.329100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.329379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.629 [2024-11-26 04:14:38.329670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.329848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.330031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.330176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:36.629 [2024-11-26 04:14:38.330270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.330292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.342363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.342490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.629 [2024-11-26 04:14:38.342564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.342587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.346135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.346243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.629 [2024-11-26 04:14:38.346297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 
04:14:38.346324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.346396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.346437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:36.629 [2024-11-26 04:14:38.346493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.346532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.346573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.346714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:36.629 [2024-11-26 04:14:38.346746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.346766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.346856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.346887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:36.629 [2024-11-26 04:14:38.346911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.346930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.347020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.347078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:36.629 [2024-11-26 04:14:38.347125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.347147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.347198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.347337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:36.629 [2024-11-26 04:14:38.347368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.347395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.347492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.629 [2024-11-26 04:14:38.347537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:36.629 [2024-11-26 04:14:38.347557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.629 [2024-11-26 04:14:38.347575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.629 [2024-11-26 04:14:38.347731] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 131.675 ms, result 0 00:18:38.531 00:18:38.531 00:18:38.531 04:14:40 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:18:41.108 04:14:42 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:41.108 [2024-11-26 04:14:42.324655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
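Editor's note on the statistics dump in the shutdown above: ftl_dev_dump_stats reports 128704 total writes against 127744 user writes, and the WAF line is simply the ratio of those two counters. A minimal sanity check of the printed value (counter values copied verbatim from the log; the 4-digit rounding and the reading of the 960-block difference as FTL metadata/housekeeping writes are assumptions, not something the log states):

awk 'BEGIN { printf "WAF: %.4f\n", 128704 / 127744 }'   # -> WAF: 1.0075, matching the dump
# difference: 128704 - 127744 = 960 blocks, presumably FTL-internal writes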
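Editor's note on the verification step at ftl/dirty_shutdown.sh@93 above: spdk_dd reads 262144 blocks back out of the restored ftl0 bdev (as described by ftl.json) into testfile, after which md5 sums can be compared against the one taken at @90. A minimal stand-alone sketch, using the paths exactly as they appear in the log; how dirty_shutdown.sh actually consumes the two sums is not visible here and is assumed:

SPDK=/home/vagrant/spdk_repo/spdk
# --ib names the input bdev (ftl0, brought up from the ftl.json config),
# --of the output file, --count the number of blocks to copy.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
    --count=262144 --json="$SPDK/test/ftl/config/ftl.json"
# @90 checksummed testfile2; comparing sums taken before and after the dirty
# shutdown is (by assumption) how the script decides pass/fail.
md5sum "$SPDK/test/ftl/testfile" "$SPDK/test/ftl/testfile2"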
00:18:41.108 [2024-11-26 04:14:42.324902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85885 ] 00:18:41.108 [2024-11-26 04:14:42.471003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.108 [2024-11-26 04:14:42.502293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.108 [2024-11-26 04:14:42.587080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:41.108 [2024-11-26 04:14:42.587340] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:41.108 [2024-11-26 04:14:42.732955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.108 [2024-11-26 04:14:42.733162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:41.108 [2024-11-26 04:14:42.733231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:41.108 [2024-11-26 04:14:42.733255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.108 [2024-11-26 04:14:42.733334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.108 [2024-11-26 04:14:42.733364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.108 [2024-11-26 04:14:42.733383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:41.108 [2024-11-26 04:14:42.733455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.108 [2024-11-26 04:14:42.733494] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:41.108 [2024-11-26 04:14:42.733780] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:41.108 [2024-11-26 04:14:42.733826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.108 [2024-11-26 04:14:42.733851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.108 [2024-11-26 04:14:42.733910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:18:41.108 [2024-11-26 04:14:42.733937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.108 [2024-11-26 04:14:42.735125] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:41.108 [2024-11-26 04:14:42.737354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.108 [2024-11-26 04:14:42.737469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:41.108 [2024-11-26 04:14:42.737546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:18:41.108 [2024-11-26 04:14:42.737634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.108 [2024-11-26 04:14:42.737695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.108 [2024-11-26 04:14:42.737705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:41.108 [2024-11-26 04:14:42.737714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:41.108 [2024-11-26 04:14:42.737721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.108 [2024-11-26 04:14:42.742515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.108 [2024-11-26 
04:14:42.742542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.108 [2024-11-26 04:14:42.742551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:18:41.108 [2024-11-26 04:14:42.742563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.108 [2024-11-26 04:14:42.742635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.108 [2024-11-26 04:14:42.742649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.108 [2024-11-26 04:14:42.742656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:41.108 [2024-11-26 04:14:42.742664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.108 [2024-11-26 04:14:42.742700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.108 [2024-11-26 04:14:42.742709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:41.109 [2024-11-26 04:14:42.742719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:41.109 [2024-11-26 04:14:42.742730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.109 [2024-11-26 04:14:42.742757] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.109 [2024-11-26 04:14:42.744070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.109 [2024-11-26 04:14:42.744104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.109 [2024-11-26 04:14:42.744112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:18:41.109 [2024-11-26 04:14:42.744119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.109 [2024-11-26 04:14:42.744148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.109 [2024-11-26 04:14:42.744156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:41.109 [2024-11-26 04:14:42.744173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:41.109 [2024-11-26 04:14:42.744184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.109 [2024-11-26 04:14:42.744206] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:41.109 [2024-11-26 04:14:42.744223] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:41.109 [2024-11-26 04:14:42.744258] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:41.109 [2024-11-26 04:14:42.744272] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:41.109 [2024-11-26 04:14:42.744343] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:41.109 [2024-11-26 04:14:42.744355] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:41.109 [2024-11-26 04:14:42.744367] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:41.109 [2024-11-26 04:14:42.744376] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744385] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744392] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:41.109 [2024-11-26 04:14:42.744399] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:41.109 [2024-11-26 04:14:42.744409] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:41.109 [2024-11-26 04:14:42.744416] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:41.109 [2024-11-26 04:14:42.744428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.109 [2024-11-26 04:14:42.744436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:41.109 [2024-11-26 04:14:42.744445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:18:41.109 [2024-11-26 04:14:42.744453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.109 [2024-11-26 04:14:42.744529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.109 [2024-11-26 04:14:42.744538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:41.109 [2024-11-26 04:14:42.744545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:41.109 [2024-11-26 04:14:42.744552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.109 [2024-11-26 04:14:42.744623] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:41.109 [2024-11-26 04:14:42.744632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:41.109 [2024-11-26 04:14:42.744639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:41.109 [2024-11-26 04:14:42.744668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:41.109 [2024-11-26 04:14:42.744689] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.109 [2024-11-26 04:14:42.744704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:41.109 [2024-11-26 04:14:42.744711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:41.109 [2024-11-26 04:14:42.744717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:41.109 [2024-11-26 04:14:42.744731] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:41.109 [2024-11-26 04:14:42.744739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:41.109 [2024-11-26 04:14:42.744747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:41.109 [2024-11-26 04:14:42.744761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:41.109 [2024-11-26 04:14:42.744768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:41.109 [2024-11-26 04:14:42.744783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:41.109 [2024-11-26 04:14:42.744790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:41.109 [2024-11-26 04:14:42.744805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:41.109 [2024-11-26 04:14:42.744829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:41.109 [2024-11-26 04:14:42.744851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:41.109 [2024-11-26 04:14:42.744873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:41.109 [2024-11-26 04:14:42.744894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.109 [2024-11-26 04:14:42.744909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:41.109 [2024-11-26 04:14:42.744916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:41.109 [2024-11-26 04:14:42.744923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:41.109 [2024-11-26 04:14:42.744930] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:41.109 [2024-11-26 04:14:42.744940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:41.109 [2024-11-26 04:14:42.744948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:41.109 [2024-11-26 04:14:42.744956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:41.109 [2024-11-26 04:14:42.744964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:41.109 [2024-11-26 04:14:42.744972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:41.109 [2024-11-26 04:14:42.744980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:41.109 [2024-11-26 04:14:42.744988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:41.109 [2024-11-26 04:14:42.744995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:41.109 [2024-11-26 04:14:42.745003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:41.109 [2024-11-26 04:14:42.745011] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:41.109 [2024-11-26 04:14:42.745021] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.109 [2024-11-26 04:14:42.745030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:41.109 [2024-11-26 04:14:42.745038] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:41.109 [2024-11-26 04:14:42.745046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:41.109 [2024-11-26 04:14:42.745054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:41.109 [2024-11-26 04:14:42.745062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:41.109 [2024-11-26 04:14:42.745072] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:41.109 [2024-11-26 04:14:42.745080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:41.109 [2024-11-26 04:14:42.745087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:41.109 [2024-11-26 04:14:42.745095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:41.109 [2024-11-26 04:14:42.745103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:41.109 [2024-11-26 04:14:42.745111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:41.109 [2024-11-26 04:14:42.745118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:41.109 [2024-11-26 04:14:42.745125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:41.109 [2024-11-26 04:14:42.745132] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:41.109 [2024-11-26 04:14:42.745140] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:41.109 [2024-11-26 04:14:42.745148] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:41.110 [2024-11-26 04:14:42.745155] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:41.110 [2024-11-26 04:14:42.745162] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:41.110 [2024-11-26 04:14:42.745169] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:41.110 [2024-11-26 04:14:42.745177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.745184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:41.110 [2024-11-26 04:14:42.745193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:18:41.110 [2024-11-26 04:14:42.745201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.751455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.751576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:41.110 [2024-11-26 04:14:42.751638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.220 ms 00:18:41.110 [2024-11-26 04:14:42.751722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.751818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.751850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:41.110 [2024-11-26 04:14:42.751900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:41.110 [2024-11-26 04:14:42.751922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.771146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.771320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:41.110 [2024-11-26 04:14:42.771400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.163 ms 00:18:41.110 [2024-11-26 04:14:42.771557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.771635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.771801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.110 [2024-11-26 04:14:42.771837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:41.110 [2024-11-26 04:14:42.771872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.772271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.772399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.110 [2024-11-26 04:14:42.772465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:18:41.110 [2024-11-26 04:14:42.772517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.772741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.772830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.110 [2024-11-26 04:14:42.772891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:41.110 [2024-11-26 04:14:42.772923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.778421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.778544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.110 [2024-11-26 04:14:42.778558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.329 ms 00:18:41.110 [2024-11-26 
04:14:42.778566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.780857] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:18:41.110 [2024-11-26 04:14:42.780966] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:41.110 [2024-11-26 04:14:42.781032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.781053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:41.110 [2024-11-26 04:14:42.781072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.387 ms 00:18:41.110 [2024-11-26 04:14:42.781177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.795589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.795701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:41.110 [2024-11-26 04:14:42.795762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.366 ms 00:18:41.110 [2024-11-26 04:14:42.795784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.799596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.799952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:41.110 [2024-11-26 04:14:42.800143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.585 ms 00:18:41.110 [2024-11-26 04:14:42.800217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.803663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.803917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:41.110 [2024-11-26 04:14:42.804084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:18:41.110 [2024-11-26 04:14:42.804154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.804833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.805044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:41.110 [2024-11-26 04:14:42.805251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:18:41.110 [2024-11-26 04:14:42.805452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.825965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.826014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:41.110 [2024-11-26 04:14:42.826025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.276 ms 00:18:41.110 [2024-11-26 04:14:42.826033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.833441] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:41.110 [2024-11-26 04:14:42.836092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.836122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:41.110 [2024-11-26 04:14:42.836133] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.996 ms 00:18:41.110 [2024-11-26 04:14:42.836141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.836211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.836220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:41.110 [2024-11-26 04:14:42.836229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:41.110 [2024-11-26 04:14:42.836236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.837405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.837438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:41.110 [2024-11-26 04:14:42.837453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:18:41.110 [2024-11-26 04:14:42.837463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.838761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.838881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:41.110 [2024-11-26 04:14:42.838901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:18:41.110 [2024-11-26 04:14:42.838908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.838939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.838951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:41.110 [2024-11-26 04:14:42.838961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:41.110 [2024-11-26 04:14:42.838972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.839002] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:41.110 [2024-11-26 04:14:42.839015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.839022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:41.110 [2024-11-26 04:14:42.839030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:41.110 [2024-11-26 04:14:42.839036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.842217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.842329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:41.110 [2024-11-26 04:14:42.842343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.164 ms 00:18:41.110 [2024-11-26 04:14:42.842355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.842419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.110 [2024-11-26 04:14:42.842428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:41.110 [2024-11-26 04:14:42.842439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:41.110 [2024-11-26 04:14:42.842450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.110 [2024-11-26 04:14:42.849723] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 113.693 ms, result 0 00:18:42.485  [2024-11-26T04:14:45.186Z] Copying: 1060/1048576 [kB] (1060 kBps) [2024-11-26T04:14:46.120Z] Copying: 6080/1048576 [kB] (5020 kBps) [2024-11-26T04:14:47.055Z] Copying: 58/1024 [MB] (52 MBps) [2024-11-26T04:14:48.429Z] Copying: 112/1024 [MB] (54 MBps) [2024-11-26T04:14:49.373Z] Copying: 165/1024 [MB] (53 MBps) [2024-11-26T04:14:50.308Z] Copying: 218/1024 [MB] (53 MBps) [2024-11-26T04:14:51.241Z] Copying: 273/1024 [MB] (54 MBps) [2024-11-26T04:14:52.175Z] Copying: 327/1024 [MB] (54 MBps) [2024-11-26T04:14:53.108Z] Copying: 384/1024 [MB] (56 MBps) [2024-11-26T04:14:54.045Z] Copying: 437/1024 [MB] (53 MBps) [2024-11-26T04:14:55.420Z] Copying: 490/1024 [MB] (53 MBps) [2024-11-26T04:14:56.355Z] Copying: 549/1024 [MB] (58 MBps) [2024-11-26T04:14:57.289Z] Copying: 602/1024 [MB] (53 MBps) [2024-11-26T04:14:58.223Z] Copying: 658/1024 [MB] (56 MBps) [2024-11-26T04:14:59.156Z] Copying: 711/1024 [MB] (52 MBps) [2024-11-26T04:15:00.087Z] Copying: 765/1024 [MB] (54 MBps) [2024-11-26T04:15:01.459Z] Copying: 818/1024 [MB] (53 MBps) [2024-11-26T04:15:02.394Z] Copying: 872/1024 [MB] (53 MBps) [2024-11-26T04:15:03.375Z] Copying: 926/1024 [MB] (54 MBps) [2024-11-26T04:15:03.943Z] Copying: 981/1024 [MB] (54 MBps) [2024-11-26T04:15:03.943Z] Copying: 1024/1024 [MB] (average 49 MBps)[2024-11-26 04:15:03.910659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.910711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:02.175 [2024-11-26 04:15:03.910729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:02.175 [2024-11-26 04:15:03.910738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.175 [2024-11-26 04:15:03.910759] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.175 [2024-11-26 04:15:03.911213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.911231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:02.175 [2024-11-26 04:15:03.911240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:19:02.175 [2024-11-26 04:15:03.911253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.175 [2024-11-26 04:15:03.911467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.911476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:02.175 [2024-11-26 04:15:03.911484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:02.175 [2024-11-26 04:15:03.911522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.175 [2024-11-26 04:15:03.920143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.920178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:02.175 [2024-11-26 04:15:03.920188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.603 ms 00:19:02.175 [2024-11-26 04:15:03.920196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.175 [2024-11-26 04:15:03.927769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.927798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:02.175 [2024-11-26 
04:15:03.927816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.538 ms 00:19:02.175 [2024-11-26 04:15:03.927824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.175 [2024-11-26 04:15:03.929051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.929182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:02.175 [2024-11-26 04:15:03.929196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.166 ms 00:19:02.175 [2024-11-26 04:15:03.929204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.175 [2024-11-26 04:15:03.932575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.932608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:02.175 [2024-11-26 04:15:03.932618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.343 ms 00:19:02.175 [2024-11-26 04:15:03.932626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.175 [2024-11-26 04:15:03.935687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.935801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:02.175 [2024-11-26 04:15:03.935823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.036 ms 00:19:02.175 [2024-11-26 04:15:03.935831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.175 [2024-11-26 04:15:03.937383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.175 [2024-11-26 04:15:03.937415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:02.176 [2024-11-26 04:15:03.937424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:19:02.176 [2024-11-26 04:15:03.937430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.436 [2024-11-26 04:15:03.938413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.436 [2024-11-26 04:15:03.938443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:02.436 [2024-11-26 04:15:03.938452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:19:02.436 [2024-11-26 04:15:03.938458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.436 [2024-11-26 04:15:03.939470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.436 [2024-11-26 04:15:03.939605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:02.436 [2024-11-26 04:15:03.939619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:19:02.436 [2024-11-26 04:15:03.939626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.436 [2024-11-26 04:15:03.940556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.436 [2024-11-26 04:15:03.940579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:02.436 [2024-11-26 04:15:03.940587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:19:02.436 [2024-11-26 04:15:03.940594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.436 [2024-11-26 04:15:03.940621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:02.436 [2024-11-26 04:15:03.940634] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
[FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open
[FTL][ftl0] Band 3-100: 0 / 261120 wr_cnt: 0 state: free
00:19:02.437 [2024-11-26 04:15:03.941374] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:02.437 [2024-11-26 04:15:03.941388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6caa290c-be3b-49f4-bbc5-8a1ab8a54b7e 00:19:02.437 [2024-11-26 04:15:03.941396] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:19:02.437 [2024-11-26 04:15:03.941403] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 138944 00:19:02.437 [2024-11-26 04:15:03.941412] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 136960 00:19:02.437 [2024-11-26 04:15:03.941419] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0145 00:19:02.437 [2024-11-26 04:15:03.941426] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:02.437 [2024-11-26 04:15:03.941434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:02.437 [2024-11-26 04:15:03.941441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:02.437 [2024-11-26 04:15:03.941447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:02.437 [2024-11-26 04:15:03.941453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:02.437 [2024-11-26 04:15:03.941460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.437 [2024-11-26 04:15:03.941467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:02.437 [2024-11-26 04:15:03.941475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:19:02.437 [2024-11-26 04:15:03.941481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.942884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.437 [2024-11-26 04:15:03.942987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:02.437 [2024-11-26 04:15:03.943001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:19:02.437 [2024-11-26 04:15:03.943009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.943061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.437 [2024-11-26 04:15:03.943076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:02.437 [2024-11-26 04:15:03.943084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:02.437 [2024-11-26 04:15:03.943091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.948182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.948215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.437 [2024-11-26 04:15:03.948231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.948239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.948290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.948298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.437 [2024-11-26 04:15:03.948306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.948316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
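
The Dump statistics block above also makes the reported write amplification easy to cross-check: the WAF value is consistent with total writes divided by user writes, the two counters printed by ftl_dev_dump_stats. A one-line sanity check (a reader's aid, not part of the test run):

    awk 'BEGIN { printf "WAF = %.4f\n", 138944 / 136960 }'
    # prints: WAF = 1.0145
    # i.e. roughly 1.4% of all writes were FTL housekeeping on top of the
    # 136960 user writes
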
00:19:02.437 [2024-11-26 04:15:03.948380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.948389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.437 [2024-11-26 04:15:03.948396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.948403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.948417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.948424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.437 [2024-11-26 04:15:03.948432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.948438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.956841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.956879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.437 [2024-11-26 04:15:03.956889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.956896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.960529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.960559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.437 [2024-11-26 04:15:03.960568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.960575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.960610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.960623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.437 [2024-11-26 04:15:03.960631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.960638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.960676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.960684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.437 [2024-11-26 04:15:03.960691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.960698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.960758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.960767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.437 [2024-11-26 04:15:03.960778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.960790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.960818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.960826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:02.437 [2024-11-26 04:15:03.960833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.960840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
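
Each management step in these traces is emitted by mngt/ftl_mngt.c as a quadruplet of records: a 406:trace_step marker (Action or Rollback), a 407 name, a 409 duration and a 410 status. Long runs like this shutdown sequence are easier to scan when collapsed to one line per step; the pipeline below is a reader's sketch of such a summary (not part of the test), assuming this console output has been saved to a file named build.log. Several records share one physical line in this capture, so each [timestamp] record is first split onto its own line:

    # split run-on records, pair each step name with its duration, list the slowest
    sed 's/\[2024-/\n[2024-/g' build.log |
      awk '/407:trace_step/ { n = $0; sub(/.*name: /, "", n); sub(/ [0-9]+:[0-9]+:[0-9]+\.[0-9]+.*/, "", n) }
           /409:trace_step/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d); printf "%9.3f ms  %s\n", d, n }' |
      sort -rn | head

Among the slowest steps visible in the first startup trace above are 'Restore P2L checkpoints' (20.276 ms) and 'Restore valid map metadata' (14.366 ms), out of the 113.693 ms total reported for 'FTL startup'.
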
00:19:02.437 [2024-11-26 04:15:03.960874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.960882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.437 [2024-11-26 04:15:03.960892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.960898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.960939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.437 [2024-11-26 04:15:03.960949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.437 [2024-11-26 04:15:03.960957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.437 [2024-11-26 04:15:03.960964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.437 [2024-11-26 04:15:03.961082] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.400 ms, result 0 00:19:02.437 00:19:02.437 00:19:02.438 04:15:04 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:04.967 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
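
The @94 check above and the @95 command that follows are the core of the dirty-shutdown verification: one half of the test data has just been compared against a checksum recorded before the unclean stop (testfile: OK), and the other half is now read back through the freshly restarted FTL instance. A minimal sketch of that readback-and-verify pattern, with the binary path, bdev name, block counts and JSON config taken from the logged command; the block size is evidently 4 KiB, since 262144 blocks show up as 1024 [MB] in the Copying progress, and the testfile2.md5 checksum file is an assumption here, by analogy with the @94 check:

    SPDK=/home/vagrant/spdk_repo/spdk
    # read the second 262144 FTL blocks (4 KiB each, i.e. 1024 MB),
    # skipping the 262144 blocks already checked at @94
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
        --count=262144 --skip=262144 --json="$SPDK/test/ftl/config/ftl.json"
    # then compare against a checksum taken before the dirty shutdown
    md5sum -c "$SPDK/test/ftl/testfile2.md5"
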
00:19:04.967 04:15:06 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json [2024-11-26 04:15:06.235847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-11-26 04:15:06.235973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86141 ] [2024-11-26 04:15:06.385439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-11-26 04:15:06.416571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 [2024-11-26 04:15:06.500895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 [2024-11-26 04:15:06.500964] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 [2024-11-26 04:15:06.646872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.967 [2024-11-26 04:15:06.646923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.967 [2024-11-26 04:15:06.646936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.967 [2024-11-26 04:15:06.646944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.967 [2024-11-26 04:15:06.647000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.967 [2024-11-26 04:15:06.647010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.967 [2024-11-26 04:15:06.647018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:04.967 [2024-11-26 04:15:06.647028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.967 [2024-11-26 04:15:06.647047] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.968 [2024-11-26 04:15:06.647289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.968 [2024-11-26 04:15:06.647302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.647312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.968 [2024-11-26 04:15:06.647320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:19:04.968 [2024-11-26 04:15:06.647327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.648418] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:04.968 [2024-11-26 04:15:06.650580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.650725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:04.968 [2024-11-26 04:15:06.650748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:19:04.968 [2024-11-26 04:15:06.650756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.650807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.650816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:04.968 [2024-11-26 04:15:06.650824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:04.968 [2024-11-26 04:15:06.650832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.655618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.655651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.968 [2024-11-26 04:15:06.655662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.726 ms 00:19:04.968 [2024-11-26 04:15:06.655670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.655747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.655756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.968 [2024-11-26 04:15:06.655768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:04.968 [2024-11-26 04:15:06.655775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.655823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.655832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.968 [2024-11-26 04:15:06.655843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:04.968 [2024-11-26 04:15:06.655855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.655880] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.968 [2024-11-26 04:15:06.657185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.657213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.968 [2024-11-26 04:15:06.657228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:19:04.968 [2024-11-26 04:15:06.657235] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.657265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.657272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.968 [2024-11-26 04:15:06.657282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:04.968 [2024-11-26 04:15:06.657293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.657315] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:04.968 [2024-11-26 04:15:06.657336] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:04.968 [2024-11-26 04:15:06.657374] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:04.968 [2024-11-26 04:15:06.657389] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:04.968 [2024-11-26 04:15:06.657463] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:04.968 [2024-11-26 04:15:06.657475] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.968 [2024-11-26 04:15:06.657491] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:04.968 [2024-11-26 04:15:06.657512] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.968 [2024-11-26 04:15:06.657521] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.968 [2024-11-26 04:15:06.657529] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:04.968 [2024-11-26 04:15:06.657536] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.968 [2024-11-26 04:15:06.657543] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:04.968 [2024-11-26 04:15:06.657550] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:04.968 [2024-11-26 04:15:06.657557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.657564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.968 [2024-11-26 04:15:06.657577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:19:04.968 [2024-11-26 04:15:06.657586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.657652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.968 [2024-11-26 04:15:06.657660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.968 [2024-11-26 04:15:06.657667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:04.968 [2024-11-26 04:15:06.657675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.968 [2024-11-26 04:15:06.657747] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.968 [2024-11-26 04:15:06.657756] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.968 [2024-11-26 04:15:06.657765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.968 [2024-11-26 
04:15:06.657772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.968 [2024-11-26 04:15:06.657782] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.968 [2024-11-26 04:15:06.657788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.968 [2024-11-26 04:15:06.657795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:04.968 [2024-11-26 04:15:06.657801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.968 [2024-11-26 04:15:06.657808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:04.968 [2024-11-26 04:15:06.657815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.968 [2024-11-26 04:15:06.657822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.968 [2024-11-26 04:15:06.657828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:04.968 [2024-11-26 04:15:06.657836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.969 [2024-11-26 04:15:06.657850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.969 [2024-11-26 04:15:06.657858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:04.969 [2024-11-26 04:15:06.657868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.969 [2024-11-26 04:15:06.657875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.969 [2024-11-26 04:15:06.657883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:04.969 [2024-11-26 04:15:06.657890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.969 [2024-11-26 04:15:06.657897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:04.969 [2024-11-26 04:15:06.657904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:04.969 [2024-11-26 04:15:06.657912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:04.969 [2024-11-26 04:15:06.657919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.969 [2024-11-26 04:15:06.657927] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:04.969 [2024-11-26 04:15:06.657934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:04.969 [2024-11-26 04:15:06.657942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.969 [2024-11-26 04:15:06.657949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:04.969 [2024-11-26 04:15:06.657956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:04.969 [2024-11-26 04:15:06.657963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.969 [2024-11-26 04:15:06.657970] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:04.969 [2024-11-26 04:15:06.657977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:04.969 [2024-11-26 04:15:06.657988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.969 [2024-11-26 04:15:06.657995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:04.969 [2024-11-26 04:15:06.658002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:04.969 [2024-11-26 04:15:06.658009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 
00:19:04.969 [2024-11-26 04:15:06.658017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:04.969 [2024-11-26 04:15:06.658023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.969 [2024-11-26 04:15:06.658031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.969 [2024-11-26 04:15:06.658038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:04.969 [2024-11-26 04:15:06.658045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.969 [2024-11-26 04:15:06.658052] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.969 [2024-11-26 04:15:06.658060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.969 [2024-11-26 04:15:06.658068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.969 [2024-11-26 04:15:06.658078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.969 [2024-11-26 04:15:06.658088] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.969 [2024-11-26 04:15:06.658096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.969 [2024-11-26 04:15:06.658103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.969 [2024-11-26 04:15:06.658112] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.969 [2024-11-26 04:15:06.658119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.969 [2024-11-26 04:15:06.658127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.969 [2024-11-26 04:15:06.658135] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.969 [2024-11-26 04:15:06.658148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.969 [2024-11-26 04:15:06.658157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:04.969 [2024-11-26 04:15:06.658165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:04.969 [2024-11-26 04:15:06.658172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:04.969 [2024-11-26 04:15:06.658180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:04.969 [2024-11-26 04:15:06.658188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:04.969 [2024-11-26 04:15:06.658196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:04.969 [2024-11-26 04:15:06.658204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:04.969 [2024-11-26 04:15:06.658211] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:04.969 [2024-11-26 04:15:06.658219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 
blk_offs:0x6160 blk_sz:0x40 00:19:04.969 [2024-11-26 04:15:06.658227] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:04.969 [2024-11-26 04:15:06.658235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:04.969 [2024-11-26 04:15:06.658244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:04.969 [2024-11-26 04:15:06.658253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:04.969 [2024-11-26 04:15:06.658260] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.969 [2024-11-26 04:15:06.658269] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.969 [2024-11-26 04:15:06.658277] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.969 [2024-11-26 04:15:06.658285] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.969 [2024-11-26 04:15:06.658293] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.969 [2024-11-26 04:15:06.658301] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.969 [2024-11-26 04:15:06.658308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.969 [2024-11-26 04:15:06.658314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.969 [2024-11-26 04:15:06.658325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:19:04.969 [2024-11-26 04:15:06.658336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.969 [2024-11-26 04:15:06.664324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.664363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.970 [2024-11-26 04:15:06.664372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.949 ms 00:19:04.970 [2024-11-26 04:15:06.664379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.664461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.664474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.970 [2024-11-26 04:15:06.664481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:04.970 [2024-11-26 04:15:06.664488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.693375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.693592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.970 [2024-11-26 04:15:06.693635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.823 ms 00:19:04.970 [2024-11-26 04:15:06.693648] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.693713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.693726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.970 [2024-11-26 04:15:06.693738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:04.970 [2024-11-26 04:15:06.693753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.694150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.694172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.970 [2024-11-26 04:15:06.694186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:19:04.970 [2024-11-26 04:15:06.694198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.694369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.694383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.970 [2024-11-26 04:15:06.694396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:04.970 [2024-11-26 04:15:06.694407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.700614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.700771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.970 [2024-11-26 04:15:06.700797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.176 ms 00:19:04.970 [2024-11-26 04:15:06.700808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.703081] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:04.970 [2024-11-26 04:15:06.703123] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:04.970 [2024-11-26 04:15:06.703133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.703141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:04.970 [2024-11-26 04:15:06.703149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:19:04.970 [2024-11-26 04:15:06.703156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.717773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.717825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:04.970 [2024-11-26 04:15:06.717837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.580 ms 00:19:04.970 [2024-11-26 04:15:06.717845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.719498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.719567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:04.970 [2024-11-26 04:15:06.719576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:19:04.970 [2024-11-26 04:15:06.719583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.721044] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.721074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:04.970 [2024-11-26 04:15:06.721082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:19:04.970 [2024-11-26 04:15:06.721089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.970 [2024-11-26 04:15:06.721279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.970 [2024-11-26 04:15:06.721289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.970 [2024-11-26 04:15:06.721297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:04.970 [2024-11-26 04:15:06.721306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.739137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.739321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:05.229 [2024-11-26 04:15:06.739339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.815 ms 00:19:05.229 [2024-11-26 04:15:06.739347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.746713] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:05.229 [2024-11-26 04:15:06.749292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.749323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:05.229 [2024-11-26 04:15:06.749334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.901 ms 00:19:05.229 [2024-11-26 04:15:06.749342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.749413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.749423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:05.229 [2024-11-26 04:15:06.749442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:05.229 [2024-11-26 04:15:06.749450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.750045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.750163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:05.229 [2024-11-26 04:15:06.750178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:19:05.229 [2024-11-26 04:15:06.750185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.751462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.751491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:05.229 [2024-11-26 04:15:06.751515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:19:05.229 [2024-11-26 04:15:06.751523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.751555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.751562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:05.229 [2024-11-26 04:15:06.751573] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:05.229 [2024-11-26 04:15:06.751581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.751611] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:05.229 [2024-11-26 04:15:06.751620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.751631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:05.229 [2024-11-26 04:15:06.751638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:05.229 [2024-11-26 04:15:06.751645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.754950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.754982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:05.229 [2024-11-26 04:15:06.754992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:19:05.229 [2024-11-26 04:15:06.755005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.755069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.229 [2024-11-26 04:15:06.755078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:05.229 [2024-11-26 04:15:06.755089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:05.229 [2024-11-26 04:15:06.755096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.229 [2024-11-26 04:15:06.756034] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.784 ms, result 0 00:19:06.602  [2024-11-26T04:15:08.935Z] Copying: 49/1024 [MB] (49 MBps) [2024-11-26T04:15:10.308Z] Copying: 101/1024 [MB] (52 MBps) [2024-11-26T04:15:11.242Z] Copying: 148/1024 [MB] (46 MBps) [2024-11-26T04:15:12.176Z] Copying: 198/1024 [MB] (50 MBps) [2024-11-26T04:15:13.109Z] Copying: 247/1024 [MB] (48 MBps) [2024-11-26T04:15:14.041Z] Copying: 297/1024 [MB] (50 MBps) [2024-11-26T04:15:14.975Z] Copying: 343/1024 [MB] (45 MBps) [2024-11-26T04:15:16.349Z] Copying: 392/1024 [MB] (49 MBps) [2024-11-26T04:15:17.281Z] Copying: 442/1024 [MB] (50 MBps) [2024-11-26T04:15:18.213Z] Copying: 490/1024 [MB] (47 MBps) [2024-11-26T04:15:19.145Z] Copying: 536/1024 [MB] (46 MBps) [2024-11-26T04:15:20.079Z] Copying: 588/1024 [MB] (51 MBps) [2024-11-26T04:15:21.047Z] Copying: 637/1024 [MB] (48 MBps) [2024-11-26T04:15:21.990Z] Copying: 675/1024 [MB] (38 MBps) [2024-11-26T04:15:22.933Z] Copying: 698/1024 [MB] (22 MBps) [2024-11-26T04:15:24.321Z] Copying: 716/1024 [MB] (18 MBps) [2024-11-26T04:15:25.261Z] Copying: 741/1024 [MB] (24 MBps) [2024-11-26T04:15:26.207Z] Copying: 773/1024 [MB] (32 MBps) [2024-11-26T04:15:27.151Z] Copying: 785/1024 [MB] (11 MBps) [2024-11-26T04:15:28.096Z] Copying: 797/1024 [MB] (11 MBps) [2024-11-26T04:15:29.042Z] Copying: 810/1024 [MB] (13 MBps) [2024-11-26T04:15:29.991Z] Copying: 822/1024 [MB] (11 MBps) [2024-11-26T04:15:30.937Z] Copying: 834/1024 [MB] (12 MBps) [2024-11-26T04:15:32.326Z] Copying: 846/1024 [MB] (11 MBps) [2024-11-26T04:15:33.271Z] Copying: 857/1024 [MB] (11 MBps) [2024-11-26T04:15:34.214Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-26T04:15:35.157Z] Copying: 884/1024 [MB] (15 MBps) [2024-11-26T04:15:36.105Z] Copying: 898/1024 [MB] (14 MBps) [2024-11-26T04:15:37.082Z] 
Copying: 915/1024 [MB] (16 MBps) [2024-11-26T04:15:38.022Z] Copying: 937/1024 [MB] (21 MBps) [2024-11-26T04:15:38.962Z] Copying: 956/1024 [MB] (19 MBps) [2024-11-26T04:15:40.346Z] Copying: 975/1024 [MB] (19 MBps) [2024-11-26T04:15:41.290Z] Copying: 997/1024 [MB] (21 MBps) [2024-11-26T04:15:42.234Z] Copying: 1008/1024 [MB] (11 MBps) [2024-11-26T04:15:42.496Z] Copying: 1020/1024 [MB] (11 MBps) [2024-11-26T04:15:42.496Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-26 04:15:42.404365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.728 [2024-11-26 04:15:42.404441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:40.728 [2024-11-26 04:15:42.404464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:40.728 [2024-11-26 04:15:42.404478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.404538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.729 [2024-11-26 04:15:42.405115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.405139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:40.729 [2024-11-26 04:15:42.405154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:19:40.729 [2024-11-26 04:15:42.405167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.405573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.405591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:40.729 [2024-11-26 04:15:42.405608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:19:40.729 [2024-11-26 04:15:42.405622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.412022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.412060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:40.729 [2024-11-26 04:15:42.412075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.362 ms 00:19:40.729 [2024-11-26 04:15:42.412088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.420112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.420251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:40.729 [2024-11-26 04:15:42.420270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.993 ms 00:19:40.729 [2024-11-26 04:15:42.420284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.422835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.422868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:40.729 [2024-11-26 04:15:42.422877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:19:40.729 [2024-11-26 04:15:42.422884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.426580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.426708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:40.729 
[2024-11-26 04:15:42.426730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms 00:19:40.729 [2024-11-26 04:15:42.426738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.435101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.435132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:40.729 [2024-11-26 04:15:42.435142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.341 ms 00:19:40.729 [2024-11-26 04:15:42.435150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.437618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.437743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:40.729 [2024-11-26 04:15:42.437757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:19:40.729 [2024-11-26 04:15:42.437766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.440163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.440206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:40.729 [2024-11-26 04:15:42.440217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.367 ms 00:19:40.729 [2024-11-26 04:15:42.440224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.442129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.442162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:40.729 [2024-11-26 04:15:42.442171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:19:40.729 [2024-11-26 04:15:42.442177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.443984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.729 [2024-11-26 04:15:42.444027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.729 [2024-11-26 04:15:42.444037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:19:40.729 [2024-11-26 04:15:42.444045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.729 [2024-11-26 04:15:42.444081] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.729 [2024-11-26 04:15:42.444095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:19:40.729 [2024-11-26 04:15:42.444111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:19:40.729 [2024-11-26 04:15:42.444119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.729 [2024-11-26 04:15:42.444456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444519] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 
04:15:42.444701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:40.730 [2024-11-26 04:15:42.444848] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.730 [2024-11-26 04:15:42.444856] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6caa290c-be3b-49f4-bbc5-8a1ab8a54b7e 00:19:40.730 [2024-11-26 04:15:42.444864] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:19:40.730 [2024-11-26 04:15:42.444871] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:40.730 [2024-11-26 04:15:42.444878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:40.730 [2024-11-26 04:15:42.444889] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:40.730 [2024-11-26 04:15:42.444896] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.730 [2024-11-26 
04:15:42.444904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.730 [2024-11-26 04:15:42.444911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.730 [2024-11-26 04:15:42.444917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.730 [2024-11-26 04:15:42.444923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:40.730 [2024-11-26 04:15:42.444930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.730 [2024-11-26 04:15:42.444938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.730 [2024-11-26 04:15:42.444946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:19:40.730 [2024-11-26 04:15:42.444955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.446583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.730 [2024-11-26 04:15:42.446672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.730 [2024-11-26 04:15:42.446718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:19:40.730 [2024-11-26 04:15:42.446741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.446827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.730 [2024-11-26 04:15:42.446855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.730 [2024-11-26 04:15:42.446875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:40.730 [2024-11-26 04:15:42.446893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.451954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.730 [2024-11-26 04:15:42.452067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.730 [2024-11-26 04:15:42.452115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.730 [2024-11-26 04:15:42.452137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.452194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.730 [2024-11-26 04:15:42.452219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.730 [2024-11-26 04:15:42.452237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.730 [2024-11-26 04:15:42.452256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.452361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.730 [2024-11-26 04:15:42.452429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.730 [2024-11-26 04:15:42.452748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.730 [2024-11-26 04:15:42.452854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.452930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.730 [2024-11-26 04:15:42.452989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.730 [2024-11-26 04:15:42.453022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.730 [2024-11-26 04:15:42.453133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.461493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.730 [2024-11-26 04:15:42.461663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.730 [2024-11-26 04:15:42.462009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.730 [2024-11-26 04:15:42.462053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.465771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.730 [2024-11-26 04:15:42.465811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.730 [2024-11-26 04:15:42.465821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.730 [2024-11-26 04:15:42.465829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.730 [2024-11-26 04:15:42.465870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.730 [2024-11-26 04:15:42.465879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.731 [2024-11-26 04:15:42.465887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.731 [2024-11-26 04:15:42.465894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.731 [2024-11-26 04:15:42.465935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.731 [2024-11-26 04:15:42.465944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.731 [2024-11-26 04:15:42.465951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.731 [2024-11-26 04:15:42.465962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.731 [2024-11-26 04:15:42.466023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.731 [2024-11-26 04:15:42.466032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.731 [2024-11-26 04:15:42.466040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.731 [2024-11-26 04:15:42.466047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.731 [2024-11-26 04:15:42.466083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.731 [2024-11-26 04:15:42.466092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.731 [2024-11-26 04:15:42.466100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.731 [2024-11-26 04:15:42.466110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.731 [2024-11-26 04:15:42.466149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.731 [2024-11-26 04:15:42.466158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.731 [2024-11-26 04:15:42.466166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.731 [2024-11-26 04:15:42.466173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.731 [2024-11-26 04:15:42.466222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.731 [2024-11-26 04:15:42.466232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.731 [2024-11-26 04:15:42.466240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.731 
[2024-11-26 04:15:42.466250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.731 [2024-11-26 04:15:42.466357] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.983 ms, result 0 00:19:40.992 00:19:40.992 00:19:40.992 04:15:42 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:19:43.540 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:19:43.540 04:15:44 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:19:43.540 04:15:44 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:19:43.540 04:15:44 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.540 04:15:44 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:43.540 04:15:44 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:19:43.540 04:15:45 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:43.540 04:15:45 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:19:43.540 04:15:45 -- ftl/dirty_shutdown.sh@37 -- # killprocess 85002 00:19:43.540 04:15:45 -- common/autotest_common.sh@936 -- # '[' -z 85002 ']' 00:19:43.540 04:15:45 -- common/autotest_common.sh@940 -- # kill -0 85002 00:19:43.540 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (85002) - No such process 00:19:43.540 Process with pid 85002 is not found 00:19:43.540 04:15:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 85002 is not found' 00:19:43.540 04:15:45 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:19:43.540 Remove shared memory files 00:19:43.540 04:15:45 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:19:43.540 04:15:45 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:43.540 04:15:45 -- ftl/common.sh@205 -- # rm -f rm -f 00:19:43.540 04:15:45 -- ftl/common.sh@206 -- # rm -f rm -f 00:19:43.540 04:15:45 -- ftl/common.sh@207 -- # rm -f rm -f 00:19:43.540 04:15:45 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:43.540 04:15:45 -- ftl/common.sh@209 -- # rm -f rm -f 00:19:43.540 ************************************ 00:19:43.540 END TEST ftl_dirty_shutdown 00:19:43.540 ************************************ 00:19:43.540 00:19:43.540 real 2m20.206s 00:19:43.540 user 2m33.832s 00:19:43.540 sys 0m21.959s 00:19:43.540 04:15:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:43.540 04:15:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.802 04:15:45 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:19:43.802 04:15:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:43.802 04:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:43.802 04:15:45 -- common/autotest_common.sh@10 -- # set +x 00:19:43.802 ************************************ 00:19:43.802 START TEST ftl_upgrade_shutdown 00:19:43.802 ************************************ 00:19:43.802 04:15:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:19:43.802 * Looking for test storage... 
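The teardown that closes ftl_dirty_shutdown above is the usual trap-driven pattern: re-verify the recovered file against the checksum taken before the dirty shutdown, delete the working files, kill the target only if its pid still exists, and clear shared memory. A minimal sketch of that pattern; TESTDIR and SPDK_PID are illustrative stand-ins, not the test's real paths or pid:

#!/usr/bin/env bash
# Sketch of the restore_kill/remove_shm teardown traced above.
TESTDIR=${TESTDIR:-/tmp/ftl_demo}
SPDK_PID=${SPDK_PID:-}

cleanup() {
    # Check the post-recovery data against the pre-shutdown checksum.
    [[ -f "$TESTDIR/testfile2.md5" ]] && md5sum -c "$TESTDIR/testfile2.md5"
    # Remove the FTL JSON config plus the run's data and checksum files.
    rm -f "$TESTDIR"/ftl.json "$TESTDIR"/testfile "$TESTDIR"/testfile2 \
          "$TESTDIR"/testfile.md5 "$TESTDIR"/testfile2.md5
    # Kill the target only when the process is still around; an already-gone
    # pid is tolerated, matching the "No such process" branch in the log.
    [[ -n "$SPDK_PID" ]] && kill "$SPDK_PID" 2>/dev/null
    # Drop shared memory files the target may have left behind.
    rm -f /dev/shm/iscsi
    return 0
}
trap cleanup SIGINT SIGTERM EXIT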
00:19:43.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.802 04:15:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:43.802 04:15:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:43.802 04:15:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:43.802 04:15:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:43.802 04:15:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:43.802 04:15:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:43.802 04:15:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:43.802 04:15:45 -- scripts/common.sh@335 -- # IFS=.-: 00:19:43.802 04:15:45 -- scripts/common.sh@335 -- # read -ra ver1 00:19:43.802 04:15:45 -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.802 04:15:45 -- scripts/common.sh@336 -- # read -ra ver2 00:19:43.802 04:15:45 -- scripts/common.sh@337 -- # local 'op=<' 00:19:43.802 04:15:45 -- scripts/common.sh@339 -- # ver1_l=2 00:19:43.802 04:15:45 -- scripts/common.sh@340 -- # ver2_l=1 00:19:43.802 04:15:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:43.802 04:15:45 -- scripts/common.sh@343 -- # case "$op" in 00:19:43.802 04:15:45 -- scripts/common.sh@344 -- # : 1 00:19:43.802 04:15:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:43.802 04:15:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.802 04:15:45 -- scripts/common.sh@364 -- # decimal 1 00:19:43.802 04:15:45 -- scripts/common.sh@352 -- # local d=1 00:19:43.802 04:15:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.802 04:15:45 -- scripts/common.sh@354 -- # echo 1 00:19:43.802 04:15:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:43.802 04:15:45 -- scripts/common.sh@365 -- # decimal 2 00:19:43.802 04:15:45 -- scripts/common.sh@352 -- # local d=2 00:19:43.802 04:15:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.802 04:15:45 -- scripts/common.sh@354 -- # echo 2 00:19:43.802 04:15:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:43.802 04:15:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:43.802 04:15:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:43.802 04:15:45 -- scripts/common.sh@367 -- # return 0 00:19:43.802 04:15:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.802 04:15:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:43.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.802 --rc genhtml_branch_coverage=1 00:19:43.802 --rc genhtml_function_coverage=1 00:19:43.802 --rc genhtml_legend=1 00:19:43.802 --rc geninfo_all_blocks=1 00:19:43.802 --rc geninfo_unexecuted_blocks=1 00:19:43.802 00:19:43.802 ' 00:19:43.802 04:15:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:43.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.802 --rc genhtml_branch_coverage=1 00:19:43.802 --rc genhtml_function_coverage=1 00:19:43.802 --rc genhtml_legend=1 00:19:43.802 --rc geninfo_all_blocks=1 00:19:43.802 --rc geninfo_unexecuted_blocks=1 00:19:43.802 00:19:43.802 ' 00:19:43.802 04:15:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:43.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.802 --rc genhtml_branch_coverage=1 00:19:43.802 --rc genhtml_function_coverage=1 00:19:43.802 --rc genhtml_legend=1 00:19:43.802 --rc geninfo_all_blocks=1 00:19:43.802 --rc geninfo_unexecuted_blocks=1 00:19:43.802 00:19:43.802 ' 00:19:43.802 04:15:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:43.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.802 --rc genhtml_branch_coverage=1 00:19:43.802 --rc genhtml_function_coverage=1 00:19:43.802 --rc genhtml_legend=1 00:19:43.802 --rc geninfo_all_blocks=1 00:19:43.802 --rc geninfo_unexecuted_blocks=1 00:19:43.802 00:19:43.802 ' 00:19:43.802 04:15:45 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:43.802 04:15:45 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:19:43.803 04:15:45 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.803 04:15:45 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.803 04:15:45 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:43.803 04:15:45 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:43.803 04:15:45 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.803 04:15:45 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:43.803 04:15:45 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:43.803 04:15:45 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.803 04:15:45 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.803 04:15:45 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:43.803 04:15:45 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:43.803 04:15:45 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.803 04:15:45 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.803 04:15:45 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:43.803 04:15:45 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:43.803 04:15:45 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.803 04:15:45 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.803 04:15:45 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:43.803 04:15:45 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:43.803 04:15:45 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.803 04:15:45 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.803 04:15:45 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.803 04:15:45 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.803 04:15:45 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:43.803 04:15:45 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:43.803 04:15:45 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.803 04:15:45 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:19:43.803 04:15:45 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:19:43.803 04:15:45 -- ftl/common.sh@81 -- # local base_bdev= 00:19:43.803 04:15:45 -- ftl/common.sh@82 -- # local cache_bdev= 00:19:43.803 04:15:45 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:19:43.803 04:15:45 -- ftl/common.sh@89 -- # spdk_tgt_pid=86616 00:19:43.803 04:15:45 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:19:43.803 04:15:45 -- ftl/common.sh@91 -- # waitforlisten 86616 00:19:43.803 04:15:45 -- common/autotest_common.sh@829 -- # '[' -z 86616 ']' 00:19:43.803 04:15:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.803 04:15:45 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:19:43.803 04:15:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.803 04:15:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.803 04:15:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.803 04:15:45 -- common/autotest_common.sh@10 -- # set +x 00:19:44.063 [2024-11-26 04:15:45.586119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
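Behind the xtrace above, tcp_target_setup boils down to starting the target on core 0 and waiting for its RPC socket to answer. A sketch using the paths from this run; the rpc_get_methods probe is one illustrative readiness check, not the harness's actual waitforlisten implementation:

#!/usr/bin/env bash
# Launch spdk_tgt and poll its RPC socket until it responds (up to ~30 s).
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/bin/spdk_tgt" --cpumask='[0]' &
spdk_tgt_pid=$!

for _ in $(seq 60); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break            # target is up and serving RPCs
    fi
    sleep 0.5
done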
00:19:44.063 [2024-11-26 04:15:45.586388] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86616 ] 00:19:44.063 [2024-11-26 04:15:45.734329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.063 [2024-11-26 04:15:45.765848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:44.063 [2024-11-26 04:15:45.766142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.636 04:15:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.897 04:15:46 -- common/autotest_common.sh@862 -- # return 0 00:19:44.897 04:15:46 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:19:44.897 04:15:46 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:19:44.897 04:15:46 -- ftl/common.sh@99 -- # local params 00:19:44.897 04:15:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:19:44.897 04:15:46 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:19:44.897 04:15:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:19:44.897 04:15:46 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:19:44.897 04:15:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:19:44.897 04:15:46 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:19:44.897 04:15:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:19:44.897 04:15:46 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:19:44.897 04:15:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:19:44.897 04:15:46 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:19:44.897 04:15:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:19:44.897 04:15:46 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:19:44.897 04:15:46 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:19:44.897 04:15:46 -- ftl/common.sh@54 -- # local name=base 00:19:44.897 04:15:46 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:19:44.897 04:15:46 -- ftl/common.sh@56 -- # local size=20480 00:19:44.897 04:15:46 -- ftl/common.sh@59 -- # local base_bdev 00:19:44.897 04:15:46 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:19:45.159 04:15:46 -- ftl/common.sh@60 -- # base_bdev=basen1 00:19:45.159 04:15:46 -- ftl/common.sh@62 -- # local base_size 00:19:45.159 04:15:46 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:19:45.159 04:15:46 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:19:45.159 04:15:46 -- common/autotest_common.sh@1368 -- # local bdev_info 00:19:45.159 04:15:46 -- common/autotest_common.sh@1369 -- # local bs 00:19:45.159 04:15:46 -- common/autotest_common.sh@1370 -- # local nb 00:19:45.159 04:15:46 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:19:45.159 04:15:46 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:19:45.159 { 00:19:45.159 "name": "basen1", 00:19:45.159 "aliases": [ 00:19:45.159 "300ab476-7e58-4704-9a6e-ae3d0300b6ae" 00:19:45.159 ], 00:19:45.159 "product_name": "NVMe disk", 00:19:45.159 "block_size": 4096, 00:19:45.159 "num_blocks": 1310720, 00:19:45.159 "uuid": "300ab476-7e58-4704-9a6e-ae3d0300b6ae", 00:19:45.159 "assigned_rate_limits": { 00:19:45.159 "rw_ios_per_sec": 0, 00:19:45.159 
"rw_mbytes_per_sec": 0, 00:19:45.159 "r_mbytes_per_sec": 0, 00:19:45.159 "w_mbytes_per_sec": 0 00:19:45.159 }, 00:19:45.159 "claimed": true, 00:19:45.159 "claim_type": "read_many_write_one", 00:19:45.159 "zoned": false, 00:19:45.159 "supported_io_types": { 00:19:45.159 "read": true, 00:19:45.159 "write": true, 00:19:45.159 "unmap": true, 00:19:45.159 "write_zeroes": true, 00:19:45.159 "flush": true, 00:19:45.159 "reset": true, 00:19:45.159 "compare": true, 00:19:45.159 "compare_and_write": false, 00:19:45.159 "abort": true, 00:19:45.159 "nvme_admin": true, 00:19:45.159 "nvme_io": true 00:19:45.159 }, 00:19:45.159 "driver_specific": { 00:19:45.159 "nvme": [ 00:19:45.159 { 00:19:45.159 "pci_address": "0000:00:07.0", 00:19:45.159 "trid": { 00:19:45.159 "trtype": "PCIe", 00:19:45.159 "traddr": "0000:00:07.0" 00:19:45.159 }, 00:19:45.159 "ctrlr_data": { 00:19:45.159 "cntlid": 0, 00:19:45.159 "vendor_id": "0x1b36", 00:19:45.159 "model_number": "QEMU NVMe Ctrl", 00:19:45.159 "serial_number": "12341", 00:19:45.159 "firmware_revision": "8.0.0", 00:19:45.159 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:45.159 "oacs": { 00:19:45.159 "security": 0, 00:19:45.159 "format": 1, 00:19:45.159 "firmware": 0, 00:19:45.159 "ns_manage": 1 00:19:45.159 }, 00:19:45.159 "multi_ctrlr": false, 00:19:45.159 "ana_reporting": false 00:19:45.159 }, 00:19:45.159 "vs": { 00:19:45.159 "nvme_version": "1.4" 00:19:45.159 }, 00:19:45.159 "ns_data": { 00:19:45.159 "id": 1, 00:19:45.159 "can_share": false 00:19:45.159 } 00:19:45.159 } 00:19:45.159 ], 00:19:45.159 "mp_policy": "active_passive" 00:19:45.159 } 00:19:45.159 } 00:19:45.159 ]' 00:19:45.159 04:15:46 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:19:45.159 04:15:46 -- common/autotest_common.sh@1372 -- # bs=4096 00:19:45.159 04:15:46 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:19:45.159 04:15:46 -- common/autotest_common.sh@1373 -- # nb=1310720 00:19:45.159 04:15:46 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:19:45.159 04:15:46 -- common/autotest_common.sh@1377 -- # echo 5120 00:19:45.159 04:15:46 -- ftl/common.sh@63 -- # base_size=5120 00:19:45.159 04:15:46 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:19:45.159 04:15:46 -- ftl/common.sh@67 -- # clear_lvols 00:19:45.421 04:15:46 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:45.421 04:15:46 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:45.421 04:15:47 -- ftl/common.sh@28 -- # stores=2c12d8aa-252f-45f6-bd4e-80b207b89121 00:19:45.421 04:15:47 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:45.421 04:15:47 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c12d8aa-252f-45f6-bd4e-80b207b89121 00:19:45.682 04:15:47 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:19:45.941 04:15:47 -- ftl/common.sh@68 -- # lvs=d13057d6-7ca3-41be-be83-ff5bbd5f0f0e 00:19:45.941 04:15:47 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d13057d6-7ca3-41be-be83-ff5bbd5f0f0e 00:19:45.941 04:15:47 -- ftl/common.sh@107 -- # base_bdev=03955c72-f218-4871-a638-74bd43c063bf 00:19:45.941 04:15:47 -- ftl/common.sh@108 -- # [[ -z 03955c72-f218-4871-a638-74bd43c063bf ]] 00:19:45.941 04:15:47 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 03955c72-f218-4871-a638-74bd43c063bf 5120 00:19:45.941 04:15:47 -- ftl/common.sh@35 -- # local name=cache 00:19:45.941 04:15:47 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:19:45.941 04:15:47 -- ftl/common.sh@37 -- # local base_bdev=03955c72-f218-4871-a638-74bd43c063bf 00:19:45.941 04:15:47 -- ftl/common.sh@38 -- # local cache_size=5120 00:19:45.941 04:15:47 -- ftl/common.sh@41 -- # get_bdev_size 03955c72-f218-4871-a638-74bd43c063bf 00:19:45.941 04:15:47 -- common/autotest_common.sh@1367 -- # local bdev_name=03955c72-f218-4871-a638-74bd43c063bf 00:19:45.941 04:15:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:19:45.941 04:15:47 -- common/autotest_common.sh@1369 -- # local bs 00:19:45.941 04:15:47 -- common/autotest_common.sh@1370 -- # local nb 00:19:45.941 04:15:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 03955c72-f218-4871-a638-74bd43c063bf 00:19:46.199 04:15:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:19:46.199 { 00:19:46.199 "name": "03955c72-f218-4871-a638-74bd43c063bf", 00:19:46.199 "aliases": [ 00:19:46.199 "lvs/basen1p0" 00:19:46.199 ], 00:19:46.199 "product_name": "Logical Volume", 00:19:46.199 "block_size": 4096, 00:19:46.199 "num_blocks": 5242880, 00:19:46.199 "uuid": "03955c72-f218-4871-a638-74bd43c063bf", 00:19:46.199 "assigned_rate_limits": { 00:19:46.199 "rw_ios_per_sec": 0, 00:19:46.199 "rw_mbytes_per_sec": 0, 00:19:46.199 "r_mbytes_per_sec": 0, 00:19:46.199 "w_mbytes_per_sec": 0 00:19:46.199 }, 00:19:46.199 "claimed": false, 00:19:46.199 "zoned": false, 00:19:46.199 "supported_io_types": { 00:19:46.199 "read": true, 00:19:46.199 "write": true, 00:19:46.200 "unmap": true, 00:19:46.200 "write_zeroes": true, 00:19:46.200 "flush": false, 00:19:46.200 "reset": true, 00:19:46.200 "compare": false, 00:19:46.200 "compare_and_write": false, 00:19:46.200 "abort": false, 00:19:46.200 "nvme_admin": false, 00:19:46.200 "nvme_io": false 00:19:46.200 }, 00:19:46.200 "driver_specific": { 00:19:46.200 "lvol": { 00:19:46.200 "lvol_store_uuid": "d13057d6-7ca3-41be-be83-ff5bbd5f0f0e", 00:19:46.200 "base_bdev": "basen1", 00:19:46.200 "thin_provision": true, 00:19:46.200 "snapshot": false, 00:19:46.200 "clone": false, 00:19:46.200 "esnap_clone": false 00:19:46.200 } 00:19:46.200 } 00:19:46.200 } 00:19:46.200 ]' 00:19:46.200 04:15:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:19:46.200 04:15:47 -- common/autotest_common.sh@1372 -- # bs=4096 00:19:46.200 04:15:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:19:46.200 04:15:47 -- common/autotest_common.sh@1373 -- # nb=5242880 00:19:46.200 04:15:47 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:19:46.200 04:15:47 -- common/autotest_common.sh@1377 -- # echo 20480 00:19:46.200 04:15:47 -- ftl/common.sh@41 -- # local base_size=1024 00:19:46.200 04:15:47 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:46.200 04:15:47 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:19:46.458 04:15:48 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:19:46.458 04:15:48 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:19:46.458 04:15:48 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:19:46.718 04:15:48 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:19:46.718 04:15:48 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:19:46.718 04:15:48 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 03955c72-f218-4871-a638-74bd43c063bf -c cachen1p0 --l2p_dram_limit 2 00:19:46.980 
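Condensed from the xtrace above, the whole stack under test is built with a handful of RPCs against the running target. A sketch of the same sequence; the PCI addresses are this run's QEMU devices, and the lvstore and lvol names are captured from the RPC output rather than hard-coded, since a fresh run gets new UUIDs:

#!/usr/bin/env bash
# Rebuild of the setup traced above; assumes a spdk_tgt is already listening.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base side: attach the 0000:00:07.0 namespace; it appears as basen1.
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0

# Size probe as get_bdev_size does it: MiB = block_size * num_blocks / 1048576,
# here 4096 * 1310720 / 1048576 = 5120 MiB.
$rpc bdev_get_bdevs -b basen1 | jq '.[] | .block_size, .num_blocks'

# Clear stale lvstores, then carve a 20480 MiB thin lvol as FTL's base device.
for uuid in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc bdev_lvol_delete_lvstore -u "$uuid"
done
lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
base=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")

# Cache side: attach 0000:00:06.0 (cachen1) and split off a 5120 MiB slice.
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0
$rpc bdev_split_create cachen1 -s 5120 1    # yields cachen1p0

# Create the FTL bdev on top; --l2p_dram_limit caps resident L2P DRAM (in MiB,
# judging by the l2p_cache notices earlier in this log). The startup trace
# that follows is this call's output.
$rpc -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2

One cross-check against the layout dump further down: 3774873 L2P entries at an address size of 4 bytes is about 14.4 MiB, which matches the 14.50 MiB l2p region reported there once rounded up to block granularity.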
[2024-11-26 04:15:48.560844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.560894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:19:46.980 [2024-11-26 04:15:48.560910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:19:46.980 [2024-11-26 04:15:48.560918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.980 [2024-11-26 04:15:48.560981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.560991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:19:46.980 [2024-11-26 04:15:48.561003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:19:46.980 [2024-11-26 04:15:48.561010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.980 [2024-11-26 04:15:48.561034] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:19:46.980 [2024-11-26 04:15:48.561318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:19:46.980 [2024-11-26 04:15:48.561335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.561343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:19:46.980 [2024-11-26 04:15:48.561352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:19:46.980 [2024-11-26 04:15:48.561360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.980 [2024-11-26 04:15:48.561395] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c0590d80-07f1-4bef-bc8c-dfb33ce726e5 00:19:46.980 [2024-11-26 04:15:48.562495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.562535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:19:46.980 [2024-11-26 04:15:48.562544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:19:46.980 [2024-11-26 04:15:48.562553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.980 [2024-11-26 04:15:48.567691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.567846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:19:46.980 [2024-11-26 04:15:48.567863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.058 ms 00:19:46.980 [2024-11-26 04:15:48.567878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.980 [2024-11-26 04:15:48.567919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.567930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:19:46.980 [2024-11-26 04:15:48.567938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:19:46.980 [2024-11-26 04:15:48.567946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.980 [2024-11-26 04:15:48.567991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.568002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:19:46.980 [2024-11-26 04:15:48.568012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:19:46.980 [2024-11-26 04:15:48.568020] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:19:46.980 [2024-11-26 04:15:48.568044] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:19:46.980 [2024-11-26 04:15:48.569677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.569703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:19:46.980 [2024-11-26 04:15:48.569714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.637 ms 00:19:46.980 [2024-11-26 04:15:48.569721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.980 [2024-11-26 04:15:48.569748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.980 [2024-11-26 04:15:48.569756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:19:46.981 [2024-11-26 04:15:48.569770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:19:46.981 [2024-11-26 04:15:48.569777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.981 [2024-11-26 04:15:48.569794] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:19:46.981 [2024-11-26 04:15:48.569907] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:19:46.981 [2024-11-26 04:15:48.569919] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:19:46.981 [2024-11-26 04:15:48.569931] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:19:46.981 [2024-11-26 04:15:48.569944] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:19:46.981 [2024-11-26 04:15:48.569952] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:19:46.981 [2024-11-26 04:15:48.569961] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:19:46.981 [2024-11-26 04:15:48.569968] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:19:46.981 [2024-11-26 04:15:48.569984] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:19:46.981 [2024-11-26 04:15:48.569990] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:19:46.981 [2024-11-26 04:15:48.569999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.981 [2024-11-26 04:15:48.570006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:19:46.981 [2024-11-26 04:15:48.570015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:19:46.981 [2024-11-26 04:15:48.570022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.981 [2024-11-26 04:15:48.570095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.981 [2024-11-26 04:15:48.570102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:19:46.981 [2024-11-26 04:15:48.570111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:19:46.981 [2024-11-26 04:15:48.570118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.981 [2024-11-26 04:15:48.570194] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:19:46.981 [2024-11-26 04:15:48.570203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:19:46.981 [2024-11-26 
04:15:48.570213] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:19:46.981 [2024-11-26 04:15:48.570221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:19:46.981 [2024-11-26 04:15:48.570239] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:19:46.981 [2024-11-26 04:15:48.570253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:19:46.981 [2024-11-26 04:15:48.570261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:19:46.981 [2024-11-26 04:15:48.570267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570275] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:19:46.981 [2024-11-26 04:15:48.570283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:19:46.981 [2024-11-26 04:15:48.570295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570301] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:19:46.981 [2024-11-26 04:15:48.570310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:19:46.981 [2024-11-26 04:15:48.570332] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:19:46.981 [2024-11-26 04:15:48.570341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:19:46.981 [2024-11-26 04:15:48.570357] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:19:46.981 [2024-11-26 04:15:48.570365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:19:46.981 [2024-11-26 04:15:48.570373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:19:46.981 [2024-11-26 04:15:48.570380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:19:46.981 [2024-11-26 04:15:48.570389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:19:46.981 [2024-11-26 04:15:48.570396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:19:46.981 [2024-11-26 04:15:48.570405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:19:46.981 [2024-11-26 04:15:48.570412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:19:46.981 [2024-11-26 04:15:48.570423] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:19:46.981 [2024-11-26 04:15:48.570430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:19:46.981 [2024-11-26 04:15:48.570439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:19:46.981 [2024-11-26 04:15:48.570446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:19:46.981 [2024-11-26 04:15:48.570455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:19:46.981 [2024-11-26 04:15:48.570462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:19:46.981 [2024-11-26 
04:15:48.570471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:19:46.981 [2024-11-26 04:15:48.570478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:19:46.981 [2024-11-26 04:15:48.570486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:19:46.981 [2024-11-26 04:15:48.570513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:19:46.981 [2024-11-26 04:15:48.570521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570529] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:19:46.981 [2024-11-26 04:15:48.570542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:19:46.981 [2024-11-26 04:15:48.570551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:19:46.981 [2024-11-26 04:15:48.570559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:19:46.981 [2024-11-26 04:15:48.570571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:19:46.981 [2024-11-26 04:15:48.570579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:19:46.981 [2024-11-26 04:15:48.570588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:19:46.981 [2024-11-26 04:15:48.570596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:19:46.981 [2024-11-26 04:15:48.570605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:19:46.981 [2024-11-26 04:15:48.570612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:19:46.981 [2024-11-26 04:15:48.570624] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:19:46.981 [2024-11-26 04:15:48.570634] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:19:46.981 [2024-11-26 04:15:48.570652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:19:46.981 [2024-11-26 04:15:48.570679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:19:46.981 [2024-11-26 04:15:48.570687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:19:46.981 [2024-11-26 04:15:48.570696] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:19:46.981 [2024-11-26 04:15:48.570703] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570713] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570720] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570728] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:19:46.981 [2024-11-26 04:15:48.570744] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:19:46.981 [2024-11-26 04:15:48.570750] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:19:46.981 [2024-11-26 04:15:48.570760] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570768] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.981 [2024-11-26 04:15:48.570776] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:19:46.981 [2024-11-26 04:15:48.570783] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:19:46.981 [2024-11-26 04:15:48.570791] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:19:46.981 [2024-11-26 04:15:48.570798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.981 [2024-11-26 04:15:48.570806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:19:46.981 [2024-11-26 04:15:48.570813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.648 ms 00:19:46.981 [2024-11-26 04:15:48.570822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.981 [2024-11-26 04:15:48.576687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.981 [2024-11-26 04:15:48.576809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:19:46.981 [2024-11-26 04:15:48.576874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.823 ms 00:19:46.981 [2024-11-26 04:15:48.576906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.981 [2024-11-26 04:15:48.576957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.982 [2024-11-26 04:15:48.576980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:19:46.982 [2024-11-26 04:15:48.577000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:19:46.982 [2024-11-26 04:15:48.577020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.982 [2024-11-26 04:15:48.585786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.982 [2024-11-26 04:15:48.585900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:19:46.982 [2024-11-26 04:15:48.585969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.717 ms 00:19:46.982 [2024-11-26 
04:15:48.586001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.982 [2024-11-26 04:15:48.586039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.982 [2024-11-26 04:15:48.586061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:19:46.982 [2024-11-26 04:15:48.586081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:19:46.982 [2024-11-26 04:15:48.586105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.982 [2024-11-26 04:15:48.586450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.982 [2024-11-26 04:15:48.586512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:19:46.982 [2024-11-26 04:15:48.586603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:19:46.982 [2024-11-26 04:15:48.586631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.982 [2024-11-26 04:15:48.586688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.982 [2024-11-26 04:15:48.586711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:19:46.982 [2024-11-26 04:15:48.586730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:19:46.982 [2024-11-26 04:15:48.586750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.982 [2024-11-26 04:15:48.591952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.982 [2024-11-26 04:15:48.592059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:19:46.982 [2024-11-26 04:15:48.592119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.173 ms 00:19:46.982 [2024-11-26 04:15:48.592151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.982 [2024-11-26 04:15:48.600573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:19:46.982 [2024-11-26 04:15:48.601461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.982 [2024-11-26 04:15:48.601576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:19:46.982 [2024-11-26 04:15:48.601659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.231 ms 00:19:46.982 [2024-11-26 04:15:48.601700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.982 [2024-11-26 04:15:48.617517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:46.982 [2024-11-26 04:15:48.617644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:19:46.982 [2024-11-26 04:15:48.617734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.759 ms 00:19:46.982 [2024-11-26 04:15:48.617765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:46.982 [2024-11-26 04:15:48.617828] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
00:19:46.982 [2024-11-26 04:15:48.617868] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:19:50.281 [2024-11-26 04:15:51.703992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.704220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:19:50.281 [2024-11-26 04:15:51.704363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3086.145 ms 00:19:50.281 [2024-11-26 04:15:51.704397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.704517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.704685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:19:50.281 [2024-11-26 04:15:51.704739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:19:50.281 [2024-11-26 04:15:51.704758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.708380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.708493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:19:50.281 [2024-11-26 04:15:51.708523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.580 ms 00:19:50.281 [2024-11-26 04:15:51.708532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.712425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.712457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:19:50.281 [2024-11-26 04:15:51.712468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.817 ms 00:19:50.281 [2024-11-26 04:15:51.712475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.712661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.712671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:19:50.281 [2024-11-26 04:15:51.712681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.137 ms 00:19:50.281 [2024-11-26 04:15:51.712688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.736641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.736677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:19:50.281 [2024-11-26 04:15:51.736689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.905 ms 00:19:50.281 [2024-11-26 04:15:51.736697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.741135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.741169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:19:50.281 [2024-11-26 04:15:51.741183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.401 ms 00:19:50.281 [2024-11-26 04:15:51.741195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.742462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.742496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:19:50.281 [2024-11-26 04:15:51.742529] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.230 ms 00:19:50.281 [2024-11-26 04:15:51.742536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.746538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.281 [2024-11-26 04:15:51.746567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:19:50.281 [2024-11-26 04:15:51.746578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.966 ms 00:19:50.281 [2024-11-26 04:15:51.746584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.281 [2024-11-26 04:15:51.746623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.282 [2024-11-26 04:15:51.746633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:19:50.282 [2024-11-26 04:15:51.746643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:19:50.282 [2024-11-26 04:15:51.746654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.282 [2024-11-26 04:15:51.746721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:19:50.282 [2024-11-26 04:15:51.746729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:19:50.282 [2024-11-26 04:15:51.746741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:19:50.282 [2024-11-26 04:15:51.746748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:19:50.282 [2024-11-26 04:15:51.747586] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3186.343 ms, result 0 00:19:50.282 { 00:19:50.282 "name": "ftl", 00:19:50.282 "uuid": "c0590d80-07f1-4bef-bc8c-dfb33ce726e5" 00:19:50.282 } 00:19:50.282 04:15:51 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:19:50.282 [2024-11-26 04:15:51.905864] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.282 04:15:51 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:19:50.543 04:15:52 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:19:50.543 [2024-11-26 04:15:52.274282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:19:50.543 04:15:52 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:19:50.803 [2024-11-26 04:15:52.462660] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:19:50.804 04:15:52 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:19:51.063 Fill FTL, iteration 1 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:19:51.063 04:15:52 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:19:51.063 04:15:52 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:19:51.063 04:15:52 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:19:51.063 04:15:52 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:19:51.063 04:15:52 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:19:51.063 04:15:52 -- ftl/common.sh@163 -- # spdk_ini_pid=86732 00:19:51.063 04:15:52 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:19:51.063 04:15:52 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:19:51.063 04:15:52 -- ftl/common.sh@165 -- # waitforlisten 86732 /var/tmp/spdk.tgt.sock 00:19:51.063 04:15:52 -- common/autotest_common.sh@829 -- # '[' -z 86732 ']' 00:19:51.063 04:15:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:19:51.063 04:15:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.063 04:15:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:19:51.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:19:51.063 04:15:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.063 04:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:51.324 [2024-11-26 04:15:52.837705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:51.324 [2024-11-26 04:15:52.837932] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86732 ] 00:19:51.324 [2024-11-26 04:15:52.981285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.324 [2024-11-26 04:15:53.012324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:51.324 [2024-11-26 04:15:53.012693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.911 04:15:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.911 04:15:53 -- common/autotest_common.sh@862 -- # return 0 00:19:51.911 04:15:53 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:19:52.182 ftln1 00:19:52.182 04:15:53 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:19:52.182 04:15:53 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:19:52.444 04:15:54 -- ftl/common.sh@173 -- # echo ']}' 00:19:52.444 04:15:54 -- ftl/common.sh@176 -- # killprocess 86732 00:19:52.444 04:15:54 -- common/autotest_common.sh@936 -- # '[' -z 86732 ']' 00:19:52.444 04:15:54 -- common/autotest_common.sh@940 -- # kill -0 86732 00:19:52.444 04:15:54 -- common/autotest_common.sh@941 -- # uname 00:19:52.444 04:15:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.444 04:15:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86732 00:19:52.444 killing process with pid 86732 00:19:52.444 04:15:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:52.444 04:15:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:52.444 04:15:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86732' 00:19:52.444 04:15:54 -- common/autotest_common.sh@955 -- # kill 86732 00:19:52.444 04:15:54 -- common/autotest_common.sh@960 -- # wait 86732 00:19:52.705 04:15:54 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:19:52.705 04:15:54 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:19:52.705 [2024-11-26 04:15:54.394222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:52.705 [2024-11-26 04:15:54.394305] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86764 ] 00:19:52.967 [2024-11-26 04:15:54.536861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.967 [2024-11-26 04:15:54.567825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.349  [2024-11-26T04:15:57.060Z] Copying: 205/1024 [MB] (205 MBps) [2024-11-26T04:15:58.002Z] Copying: 403/1024 [MB] (198 MBps) [2024-11-26T04:15:58.936Z] Copying: 599/1024 [MB] (196 MBps) [2024-11-26T04:15:59.502Z] Copying: 848/1024 [MB] (249 MBps) [2024-11-26T04:15:59.761Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:19:57.993 00:19:57.993 Calculate MD5 checksum, iteration 1 00:19:57.993 04:15:59 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:19:57.993 04:15:59 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:19:57.993 04:15:59 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:19:57.993 04:15:59 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:19:57.993 04:15:59 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:19:57.993 04:15:59 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:19:57.993 04:15:59 -- ftl/common.sh@154 -- # return 0 00:19:57.993 04:15:59 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:19:57.993 [2024-11-26 04:15:59.624708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:57.993 [2024-11-26 04:15:59.624822] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86822 ] 00:19:58.250 [2024-11-26 04:15:59.771393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.250 [2024-11-26 04:15:59.799457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.622  [2024-11-26T04:16:01.648Z] Copying: 701/1024 [MB] (701 MBps) [2024-11-26T04:16:01.648Z] Copying: 1024/1024 [MB] (average 691 MBps) 00:19:59.880 00:19:59.880 04:16:01 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:19:59.880 04:16:01 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:20:02.415 04:16:03 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:20:02.415 04:16:03 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ac476747a76b8d5cc05d41c38863383c 00:20:02.415 04:16:03 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:20:02.415 04:16:03 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:20:02.415 Fill FTL, iteration 2 00:20:02.415 04:16:03 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:20:02.415 04:16:03 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:20:02.415 04:16:03 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:20:02.415 04:16:03 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:20:02.415 04:16:03 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:20:02.415 04:16:03 -- ftl/common.sh@154 -- # return 0 00:20:02.415 04:16:03 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:20:02.415 [2024-11-26 04:16:03.710033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:02.415 [2024-11-26 04:16:03.710136] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86871 ] 00:20:02.415 [2024-11-26 04:16:03.855008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.415 [2024-11-26 04:16:03.882730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.348  [2024-11-26T04:16:06.490Z] Copying: 267/1024 [MB] (267 MBps) [2024-11-26T04:16:07.424Z] Copying: 530/1024 [MB] (263 MBps) [2024-11-26T04:16:07.989Z] Copying: 794/1024 [MB] (264 MBps) [2024-11-26T04:16:08.248Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:20:06.480 00:20:06.480 04:16:08 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:20:06.480 04:16:08 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:20:06.480 Calculate MD5 checksum, iteration 2 00:20:06.480 04:16:08 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:20:06.480 04:16:08 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:20:06.480 04:16:08 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:20:06.480 04:16:08 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:20:06.480 04:16:08 -- ftl/common.sh@154 -- # return 0 00:20:06.480 04:16:08 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:20:06.480 [2024-11-26 04:16:08.144105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:06.480 [2024-11-26 04:16:08.144206] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86918 ] 00:20:06.739 [2024-11-26 04:16:08.291375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.739 [2024-11-26 04:16:08.318404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.111  [2024-11-26T04:16:10.502Z] Copying: 671/1024 [MB] (671 MBps) [2024-11-26T04:16:13.035Z] Copying: 1024/1024 [MB] (average 662 MBps) 00:20:11.267 00:20:11.267 04:16:12 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:20:11.267 04:16:12 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:20:13.164 04:16:14 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:20:13.164 04:16:14 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4c60ae80fb7a9da2cdd657fbdcb6a07f 00:20:13.164 04:16:14 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:20:13.164 04:16:14 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:20:13.164 04:16:14 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:20:13.164 [2024-11-26 04:16:14.841956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:13.164 [2024-11-26 04:16:14.842081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:20:13.164 [2024-11-26 04:16:14.842137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:20:13.164 [2024-11-26 04:16:14.842157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.164 [2024-11-26 04:16:14.842218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:13.164 [2024-11-26 04:16:14.842239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:20:13.164 [2024-11-26 04:16:14.842259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:20:13.164 [2024-11-26 04:16:14.842276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.164 [2024-11-26 04:16:14.842301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:13.164 [2024-11-26 04:16:14.842317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:20:13.164 [2024-11-26 04:16:14.842333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:20:13.164 [2024-11-26 04:16:14.842347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.164 [2024-11-26 04:16:14.842408] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.438 ms, result 0 00:20:13.164 true 00:20:13.164 04:16:14 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:20:13.423 { 00:20:13.423 "name": "ftl", 00:20:13.423 "properties": [ 00:20:13.423 { 00:20:13.423 "name": "superblock_version", 00:20:13.423 "value": 5, 00:20:13.423 "read-only": true 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "name": "base_device", 00:20:13.423 "bands": [ 00:20:13.423 { 00:20:13.423 "id": 0, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 1, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 2, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 
00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 3, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 4, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 5, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 6, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 7, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 8, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 9, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 10, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 11, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 12, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 13, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 14, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 15, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 16, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 17, 00:20:13.423 "state": "FREE", 00:20:13.423 "validity": 0.0 00:20:13.423 } 00:20:13.423 ], 00:20:13.423 "read-only": true 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "name": "cache_device", 00:20:13.423 "type": "bdev", 00:20:13.423 "chunks": [ 00:20:13.423 { 00:20:13.423 "id": 0, 00:20:13.423 "state": "CLOSED", 00:20:13.423 "utilization": 1.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 1, 00:20:13.423 "state": "CLOSED", 00:20:13.423 "utilization": 1.0 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 2, 00:20:13.423 "state": "OPEN", 00:20:13.423 "utilization": 0.001953125 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "id": 3, 00:20:13.423 "state": "OPEN", 00:20:13.423 "utilization": 0.0 00:20:13.423 } 00:20:13.423 ], 00:20:13.423 "read-only": true 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "name": "verbose_mode", 00:20:13.423 "value": true, 00:20:13.423 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:20:13.423 }, 00:20:13.423 { 00:20:13.423 "name": "prep_upgrade_on_shutdown", 00:20:13.423 "value": false, 00:20:13.423 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:20:13.423 } 00:20:13.423 ] 00:20:13.423 } 00:20:13.423 04:16:15 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:20:13.423 [2024-11-26 04:16:15.130211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:13.423 [2024-11-26 04:16:15.130355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:20:13.423 [2024-11-26 04:16:15.130396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:20:13.423 [2024-11-26 04:16:15.130413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.423 [2024-11-26 04:16:15.130444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:20:13.423 [2024-11-26 04:16:15.130460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:20:13.423 [2024-11-26 04:16:15.130475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:20:13.423 [2024-11-26 04:16:15.130489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.423 [2024-11-26 04:16:15.130521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:13.423 [2024-11-26 04:16:15.130538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:20:13.423 [2024-11-26 04:16:15.130554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:20:13.423 [2024-11-26 04:16:15.130589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.423 [2024-11-26 04:16:15.130648] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.426 ms, result 0 00:20:13.423 true 00:20:13.423 04:16:15 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:20:13.423 04:16:15 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:20:13.423 04:16:15 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:20:13.682 04:16:15 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:20:13.682 04:16:15 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:20:13.682 04:16:15 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:20:13.941 [2024-11-26 04:16:15.510550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:13.941 [2024-11-26 04:16:15.510685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:20:13.941 [2024-11-26 04:16:15.510699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:20:13.941 [2024-11-26 04:16:15.510704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.941 [2024-11-26 04:16:15.510724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:13.941 [2024-11-26 04:16:15.510730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:20:13.941 [2024-11-26 04:16:15.510736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:20:13.941 [2024-11-26 04:16:15.510741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.941 [2024-11-26 04:16:15.510756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:13.941 [2024-11-26 04:16:15.510762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:20:13.941 [2024-11-26 04:16:15.510767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:20:13.941 [2024-11-26 04:16:15.510773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:13.941 [2024-11-26 04:16:15.510820] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.257 ms, result 0 00:20:13.941 true 00:20:13.941 04:16:15 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:20:13.941 { 00:20:13.941 "name": "ftl", 00:20:13.941 "properties": [ 00:20:13.941 { 00:20:13.941 "name": "superblock_version", 00:20:13.941 "value": 5, 00:20:13.941 "read-only": true 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 
"name": "base_device", 00:20:13.941 "bands": [ 00:20:13.941 { 00:20:13.941 "id": 0, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 1, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 2, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 3, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 4, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 5, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 6, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 7, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 8, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 9, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 10, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 11, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 12, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 13, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 14, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 15, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 16, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 17, 00:20:13.941 "state": "FREE", 00:20:13.941 "validity": 0.0 00:20:13.941 } 00:20:13.941 ], 00:20:13.941 "read-only": true 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "name": "cache_device", 00:20:13.941 "type": "bdev", 00:20:13.941 "chunks": [ 00:20:13.941 { 00:20:13.941 "id": 0, 00:20:13.941 "state": "CLOSED", 00:20:13.941 "utilization": 1.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 1, 00:20:13.941 "state": "CLOSED", 00:20:13.941 "utilization": 1.0 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 2, 00:20:13.941 "state": "OPEN", 00:20:13.941 "utilization": 0.001953125 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "id": 3, 00:20:13.941 "state": "OPEN", 00:20:13.941 "utilization": 0.0 00:20:13.941 } 00:20:13.941 ], 00:20:13.941 "read-only": true 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "name": "verbose_mode", 00:20:13.941 "value": true, 00:20:13.941 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:20:13.941 }, 00:20:13.941 { 00:20:13.941 "name": "prep_upgrade_on_shutdown", 00:20:13.941 "value": true, 00:20:13.941 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:20:13.941 } 00:20:13.941 ] 00:20:13.941 } 00:20:13.941 04:16:15 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:20:13.941 04:16:15 -- ftl/common.sh@130 -- # [[ -n 86616 ]] 00:20:13.941 04:16:15 -- ftl/common.sh@131 -- # killprocess 86616 00:20:13.941 04:16:15 -- common/autotest_common.sh@936 -- # '[' -z 86616 ']' 00:20:13.941 04:16:15 -- 
common/autotest_common.sh@940 -- # kill -0 86616 00:20:13.941 04:16:15 -- common/autotest_common.sh@941 -- # uname 00:20:13.941 04:16:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:13.941 04:16:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86616 00:20:13.941 killing process with pid 86616 00:20:13.941 04:16:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:13.941 04:16:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:13.941 04:16:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86616' 00:20:13.941 04:16:15 -- common/autotest_common.sh@955 -- # kill 86616 00:20:13.941 04:16:15 -- common/autotest_common.sh@960 -- # wait 86616 00:20:14.198 [2024-11-26 04:16:15.779750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:20:14.198 [2024-11-26 04:16:15.783837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:14.198 [2024-11-26 04:16:15.783936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:20:14.198 [2024-11-26 04:16:15.783983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:20:14.198 [2024-11-26 04:16:15.784071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:14.198 [2024-11-26 04:16:15.784102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:20:14.198 [2024-11-26 04:16:15.784468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:14.198 [2024-11-26 04:16:15.784488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:20:14.198 [2024-11-26 04:16:15.784496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:20:14.198 [2024-11-26 04:16:15.784511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.040363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.040421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:20:22.332 [2024-11-26 04:16:24.040436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8255.804 ms 00:20:22.332 [2024-11-26 04:16:24.040444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.041703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.041722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:20:22.332 [2024-11-26 04:16:24.041737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.243 ms 00:20:22.332 [2024-11-26 04:16:24.041744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.042862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.042981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:20:22.332 [2024-11-26 04:16:24.042997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.093 ms 00:20:22.332 [2024-11-26 04:16:24.043004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.044409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.044439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:20:22.332 [2024-11-26 04:16:24.044447] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.363 ms 00:20:22.332 [2024-11-26 04:16:24.044455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.046682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.046722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:20:22.332 [2024-11-26 04:16:24.046731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.199 ms 00:20:22.332 [2024-11-26 04:16:24.046739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.046807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.046816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:20:22.332 [2024-11-26 04:16:24.046824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:20:22.332 [2024-11-26 04:16:24.046831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.047900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.047930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:20:22.332 [2024-11-26 04:16:24.047939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.054 ms 00:20:22.332 [2024-11-26 04:16:24.047945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.049313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.049344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:20:22.332 [2024-11-26 04:16:24.049352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.340 ms 00:20:22.332 [2024-11-26 04:16:24.049359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.050553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.050584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:20:22.332 [2024-11-26 04:16:24.050592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.166 ms 00:20:22.332 [2024-11-26 04:16:24.050598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.051593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.332 [2024-11-26 04:16:24.051622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:20:22.332 [2024-11-26 04:16:24.051631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.939 ms 00:20:22.332 [2024-11-26 04:16:24.051638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.332 [2024-11-26 04:16:24.051664] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:20:22.332 [2024-11-26 04:16:24.051677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:20:22.332 [2024-11-26 04:16:24.051692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:20:22.332 [2024-11-26 04:16:24.051700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:20:22.332 [2024-11-26 04:16:24.051714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:22.332 [2024-11-26 04:16:24.051825] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:20:22.332 [2024-11-26 04:16:24.051833] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c0590d80-07f1-4bef-bc8c-dfb33ce726e5 00:20:22.332 [2024-11-26 04:16:24.051840] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:20:22.332 [2024-11-26 04:16:24.051847] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:20:22.332 [2024-11-26 04:16:24.051857] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:20:22.332 [2024-11-26 04:16:24.051865] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:20:22.332 [2024-11-26 04:16:24.051872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:20:22.332 [2024-11-26 04:16:24.051879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:20:22.332 [2024-11-26 04:16:24.051886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:20:22.332 [2024-11-26 04:16:24.051892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:20:22.332 [2024-11-26 04:16:24.051897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:20:22.333 [2024-11-26 04:16:24.051905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.333 [2024-11-26 04:16:24.051915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:20:22.333 [2024-11-26 04:16:24.051924] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.242 ms 00:20:22.333 [2024-11-26 04:16:24.051932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.053321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.333 [2024-11-26 04:16:24.053339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:20:22.333 [2024-11-26 04:16:24.053348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.374 ms 00:20:22.333 [2024-11-26 04:16:24.053355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.053407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:22.333 [2024-11-26 04:16:24.053415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:20:22.333 [2024-11-26 04:16:24.053427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:20:22.333 [2024-11-26 04:16:24.053434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.058627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.058750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:20:22.333 [2024-11-26 04:16:24.058813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.058835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.058876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.058983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:20:22.333 [2024-11-26 04:16:24.059012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.059031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.059096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.059189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:20:22.333 [2024-11-26 04:16:24.059213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.059231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.059273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.059364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:20:22.333 [2024-11-26 04:16:24.059387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.059410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.068432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.068588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:20:22.333 [2024-11-26 04:16:24.068641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.068663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.072264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.072376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:20:22.333 
[2024-11-26 04:16:24.072461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.072485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.072545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.072633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:20:22.333 [2024-11-26 04:16:24.072656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.072675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.072727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.072814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:20:22.333 [2024-11-26 04:16:24.072837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.072856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.072938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.072967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:20:22.333 [2024-11-26 04:16:24.072986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.073004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.073094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.073153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:20:22.333 [2024-11-26 04:16:24.073196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.073249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.073308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.073371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:20:22.333 [2024-11-26 04:16:24.073395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.073413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.073468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:20:22.333 [2024-11-26 04:16:24.073526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:20:22.333 [2024-11-26 04:16:24.073554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:20:22.333 [2024-11-26 04:16:24.073572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:22.333 [2024-11-26 04:16:24.073746] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8289.849 ms, result 0 00:20:22.899 04:16:24 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:20:22.899 04:16:24 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:20:22.899 04:16:24 -- ftl/common.sh@81 -- # local base_bdev= 00:20:22.899 04:16:24 -- ftl/common.sh@82 -- # local cache_bdev= 00:20:22.899 04:16:24 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:20:22.899 04:16:24 -- ftl/common.sh@89 -- # spdk_tgt_pid=87112 00:20:22.899 04:16:24 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:20:22.899 04:16:24 -- ftl/common.sh@91 -- # waitforlisten 87112 00:20:22.899 04:16:24 -- common/autotest_common.sh@829 -- # '[' -z 87112 ']' 00:20:22.899 04:16:24 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:22.899 04:16:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.899 04:16:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.899 04:16:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.899 04:16:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.899 04:16:24 -- common/autotest_common.sh@10 -- # set +x 00:20:23.157 [2024-11-26 04:16:24.700382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:23.157 [2024-11-26 04:16:24.700692] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87112 ] 00:20:23.157 [2024-11-26 04:16:24.841972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.157 [2024-11-26 04:16:24.872647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:23.157 [2024-11-26 04:16:24.873003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.416 [2024-11-26 04:16:25.110888] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:20:23.416 [2024-11-26 04:16:25.111141] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:20:23.676 [2024-11-26 04:16:25.247134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.247309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:20:23.676 [2024-11-26 04:16:25.247377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:20:23.676 [2024-11-26 04:16:25.247404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.247477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.247515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:20:23.676 [2024-11-26 04:16:25.247543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:20:23.676 [2024-11-26 04:16:25.247607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.247647] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:20:23.676 [2024-11-26 04:16:25.247941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:20:23.676 [2024-11-26 04:16:25.247999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.248018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:20:23.676 [2024-11-26 04:16:25.248083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:20:23.676 [2024-11-26 04:16:25.248104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.249128] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:20:23.676 [2024-11-26 04:16:25.251351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.251466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:20:23.676 [2024-11-26 04:16:25.251484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.225 ms 00:20:23.676 [2024-11-26 04:16:25.251492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.251556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.251569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:20:23.676 [2024-11-26 04:16:25.251577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:20:23.676 [2024-11-26 04:16:25.251584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.256157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.256185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:20:23.676 [2024-11-26 04:16:25.256197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.506 ms 00:20:23.676 [2024-11-26 04:16:25.256204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.256246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.256255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:20:23.676 [2024-11-26 04:16:25.256263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:20:23.676 [2024-11-26 04:16:25.256274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.256307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.256317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:20:23.676 [2024-11-26 04:16:25.256328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:20:23.676 [2024-11-26 04:16:25.256338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.256363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:20:23.676 [2024-11-26 04:16:25.257651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.257755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:20:23.676 [2024-11-26 04:16:25.257768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.294 ms 00:20:23.676 [2024-11-26 04:16:25.257775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.257807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.257820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:20:23.676 [2024-11-26 04:16:25.257828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:20:23.676 [2024-11-26 04:16:25.257835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.257856] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
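In the layout dump that follows, blk_offs and blk_sz are expressed in 4 KiB FTL blocks, which is consistent with the MiB figures printed alongside: the superblock region's blk_sz:0x20 is 32 blocks = 0.12 MiB, the l2p region's blk_sz:0xe80 is 3712 blocks = 14.50 MiB, and the base-device data region's blk_sz:0x480000 works out to the 18432.00 MiB shown for data_btm. A conversion sketch:

    blk_sz=$(( 0xe80 ))   # l2p region size in 4 KiB blocks
    awk -v b="$blk_sz" 'BEGIN { printf "%.2f MiB\n", b * 4096 / 1048576 }'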
00:20:23.676 [2024-11-26 04:16:25.257873] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:20:23.676 [2024-11-26 04:16:25.257904] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:20:23.676 [2024-11-26 04:16:25.257917] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:20:23.676 [2024-11-26 04:16:25.257993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:20:23.676 [2024-11-26 04:16:25.258003] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:20:23.676 [2024-11-26 04:16:25.258012] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:20:23.676 [2024-11-26 04:16:25.258021] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:20:23.676 [2024-11-26 04:16:25.258029] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:20:23.676 [2024-11-26 04:16:25.258037] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:20:23.676 [2024-11-26 04:16:25.258044] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:20:23.676 [2024-11-26 04:16:25.258050] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:20:23.676 [2024-11-26 04:16:25.258057] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:20:23.676 [2024-11-26 04:16:25.258064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.258075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:20:23.676 [2024-11-26 04:16:25.258084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:20:23.676 [2024-11-26 04:16:25.258093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.258153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.676 [2024-11-26 04:16:25.258161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:20:23.676 [2024-11-26 04:16:25.258168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:20:23.676 [2024-11-26 04:16:25.258174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.676 [2024-11-26 04:16:25.258255] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:20:23.676 [2024-11-26 04:16:25.258265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:20:23.676 [2024-11-26 04:16:25.258276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:20:23.676 [2024-11-26 04:16:25.258285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:23.676 [2024-11-26 04:16:25.258293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:20:23.676 [2024-11-26 04:16:25.258299] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:20:23.676 [2024-11-26 04:16:25.258305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:20:23.676 [2024-11-26 04:16:25.258311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:20:23.676 [2024-11-26 04:16:25.258319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:20:23.676 [2024-11-26 04:16:25.258325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:23.676 [2024-11-26 04:16:25.258333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:20:23.676 [2024-11-26 04:16:25.258341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:20:23.676 [2024-11-26 04:16:25.258347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:23.676 [2024-11-26 04:16:25.258354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:20:23.676 [2024-11-26 04:16:25.258360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:20:23.676 [2024-11-26 04:16:25.258366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:23.676 [2024-11-26 04:16:25.258373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:20:23.677 [2024-11-26 04:16:25.258379] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:20:23.677 [2024-11-26 04:16:25.258384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:23.677 [2024-11-26 04:16:25.258391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:20:23.677 [2024-11-26 04:16:25.258397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:20:23.677 [2024-11-26 04:16:25.258403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:20:23.677 [2024-11-26 04:16:25.258409] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:20:23.677 [2024-11-26 04:16:25.258415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:20:23.677 [2024-11-26 04:16:25.258421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:20:23.677 [2024-11-26 04:16:25.258428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:20:23.677 [2024-11-26 04:16:25.258437] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:20:23.677 [2024-11-26 04:16:25.258443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:20:23.677 [2024-11-26 04:16:25.258449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:20:23.677 [2024-11-26 04:16:25.258455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:20:23.677 [2024-11-26 04:16:25.258461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:20:23.677 [2024-11-26 04:16:25.258467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:20:23.677 [2024-11-26 04:16:25.258473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:20:23.677 [2024-11-26 04:16:25.258479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:20:23.677 [2024-11-26 04:16:25.258485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:20:23.677 [2024-11-26 04:16:25.258491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:20:23.677 [2024-11-26 04:16:25.258497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:23.677 [2024-11-26 04:16:25.258515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:20:23.677 [2024-11-26 04:16:25.258522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:20:23.677 [2024-11-26 04:16:25.258528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:23.677 [2024-11-26 04:16:25.258534] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:20:23.677 [2024-11-26 04:16:25.258541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:20:23.677 [2024-11-26 04:16:25.258550] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:20:23.677 [2024-11-26 04:16:25.258559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:20:23.677 [2024-11-26 04:16:25.258576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:20:23.677 [2024-11-26 04:16:25.258583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:20:23.677 [2024-11-26 04:16:25.258590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:20:23.677 [2024-11-26 04:16:25.258596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:20:23.677 [2024-11-26 04:16:25.258603] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:20:23.677 [2024-11-26 04:16:25.258609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:20:23.677 [2024-11-26 04:16:25.258616] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:20:23.677 [2024-11-26 04:16:25.258625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258633] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:20:23.677 [2024-11-26 04:16:25.258640] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258652] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258659] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:20:23.677 [2024-11-26 04:16:25.258666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:20:23.677 [2024-11-26 04:16:25.258672] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:20:23.677 [2024-11-26 04:16:25.258681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:20:23.677 [2024-11-26 04:16:25.258688] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:20:23.677 [2024-11-26 04:16:25.258722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:20:23.677 [2024-11-26 04:16:25.258729] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:20:23.677 [2024-11-26 04:16:25.258736] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258747] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:23.677 [2024-11-26 04:16:25.258754] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:20:23.677 [2024-11-26 04:16:25.258760] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:20:23.677 [2024-11-26 04:16:25.258767] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:20:23.677 [2024-11-26 04:16:25.258774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.258781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:20:23.677 [2024-11-26 04:16:25.258789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.564 ms 00:20:23.677 [2024-11-26 04:16:25.258798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.264383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.264410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:20:23.677 [2024-11-26 04:16:25.264419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.541 ms 00:20:23.677 [2024-11-26 04:16:25.264429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.264464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.264471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:20:23.677 [2024-11-26 04:16:25.264485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:20:23.677 [2024-11-26 04:16:25.264494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.273174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.273275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:20:23.677 [2024-11-26 04:16:25.273355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.625 ms 00:20:23.677 [2024-11-26 04:16:25.273378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.273415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.273434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:20:23.677 [2024-11-26 04:16:25.273458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:20:23.677 [2024-11-26 04:16:25.273475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.273968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.274050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:20:23.677 [2024-11-26 
04:16:25.274116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:20:23.677 [2024-11-26 04:16:25.274138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.274192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.274250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:20:23.677 [2024-11-26 04:16:25.274273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:20:23.677 [2024-11-26 04:16:25.274294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.279636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.279918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:20:23.677 [2024-11-26 04:16:25.279990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.286 ms 00:20:23.677 [2024-11-26 04:16:25.280012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.282426] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:23.677 [2024-11-26 04:16:25.282553] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:20:23.677 [2024-11-26 04:16:25.282615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.282624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:20:23.677 [2024-11-26 04:16:25.282633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.500 ms 00:20:23.677 [2024-11-26 04:16:25.282639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.286322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.286351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:20:23.677 [2024-11-26 04:16:25.286361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.652 ms 00:20:23.677 [2024-11-26 04:16:25.286369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.287475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.677 [2024-11-26 04:16:25.287523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:20:23.677 [2024-11-26 04:16:25.287532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.071 ms 00:20:23.677 [2024-11-26 04:16:25.287538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.677 [2024-11-26 04:16:25.288623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.288727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:20:23.678 [2024-11-26 04:16:25.288739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.053 ms 00:20:23.678 [2024-11-26 04:16:25.288746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.288932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.288947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:20:23.678 [2024-11-26 04:16:25.288955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.127 ms 
00:20:23.678 [2024-11-26 04:16:25.288964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.306234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.306275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:20:23.678 [2024-11-26 04:16:25.306286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.252 ms 00:20:23.678 [2024-11-26 04:16:25.306293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.313528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:20:23.678 [2024-11-26 04:16:25.314191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.314217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:20:23.678 [2024-11-26 04:16:25.314228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.849 ms 00:20:23.678 [2024-11-26 04:16:25.314240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.314297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.314307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:20:23.678 [2024-11-26 04:16:25.314318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:20:23.678 [2024-11-26 04:16:25.314325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.314364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.314375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:20:23.678 [2024-11-26 04:16:25.314386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:20:23.678 [2024-11-26 04:16:25.314393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.315595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.315622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:20:23.678 [2024-11-26 04:16:25.315630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.184 ms 00:20:23.678 [2024-11-26 04:16:25.315637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.315668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.315676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:20:23.678 [2024-11-26 04:16:25.315685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:20:23.678 [2024-11-26 04:16:25.315693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.315729] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:20:23.678 [2024-11-26 04:16:25.315738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.315745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:20:23.678 [2024-11-26 04:16:25.315752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:20:23.678 [2024-11-26 04:16:25.315759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.318568] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.318680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:20:23.678 [2024-11-26 04:16:25.318701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.790 ms 00:20:23.678 [2024-11-26 04:16:25.318710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.318769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.678 [2024-11-26 04:16:25.318778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:20:23.678 [2024-11-26 04:16:25.318786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:20:23.678 [2024-11-26 04:16:25.318792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.678 [2024-11-26 04:16:25.319764] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 72.229 ms, result 0 00:20:23.678 [2024-11-26 04:16:25.334951] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.678 [2024-11-26 04:16:25.350977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:20:23.678 [2024-11-26 04:16:25.359069] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:23.937 04:16:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.937 04:16:25 -- common/autotest_common.sh@862 -- # return 0 00:20:23.937 04:16:25 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:20:23.937 04:16:25 -- ftl/common.sh@95 -- # return 0 00:20:23.937 04:16:25 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:20:23.937 [2024-11-26 04:16:25.680043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.937 [2024-11-26 04:16:25.680084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:20:23.937 [2024-11-26 04:16:25.680100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:20:23.937 [2024-11-26 04:16:25.680108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.937 [2024-11-26 04:16:25.680130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.937 [2024-11-26 04:16:25.680138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:20:23.937 [2024-11-26 04:16:25.680146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:20:23.937 [2024-11-26 04:16:25.680153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.937 [2024-11-26 04:16:25.680173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:23.937 [2024-11-26 04:16:25.680180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:20:23.937 [2024-11-26 04:16:25.680191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:20:23.937 [2024-11-26 04:16:25.680199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:23.937 [2024-11-26 04:16:25.680258] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.206 ms, result 0 00:20:23.937 true 00:20:23.937 04:16:25 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:20:24.195 { 00:20:24.195 "name": "ftl", 00:20:24.195 "properties": [ 00:20:24.195 { 00:20:24.195 "name": "superblock_version", 00:20:24.195 "value": 5, 00:20:24.195 "read-only": true 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "name": "base_device", 00:20:24.195 "bands": [ 00:20:24.195 { 00:20:24.195 "id": 0, 00:20:24.195 "state": "CLOSED", 00:20:24.195 "validity": 1.0 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 1, 00:20:24.195 "state": "CLOSED", 00:20:24.195 "validity": 1.0 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 2, 00:20:24.195 "state": "CLOSED", 00:20:24.195 "validity": 0.007843137254901933 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 3, 00:20:24.195 "state": "FREE", 00:20:24.195 "validity": 0.0 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 4, 00:20:24.195 "state": "FREE", 00:20:24.195 "validity": 0.0 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 5, 00:20:24.195 "state": "FREE", 00:20:24.195 "validity": 0.0 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 6, 00:20:24.195 "state": "FREE", 00:20:24.195 "validity": 0.0 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 7, 00:20:24.195 "state": "FREE", 00:20:24.195 "validity": 0.0 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 8, 00:20:24.195 "state": "FREE", 00:20:24.195 "validity": 0.0 00:20:24.195 }, 00:20:24.195 { 00:20:24.195 "id": 9, 00:20:24.195 "state": "FREE", 00:20:24.195 "validity": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 10, 00:20:24.196 "state": "FREE", 00:20:24.196 "validity": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 11, 00:20:24.196 "state": "FREE", 00:20:24.196 "validity": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 12, 00:20:24.196 "state": "FREE", 00:20:24.196 "validity": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 13, 00:20:24.196 "state": "FREE", 00:20:24.196 "validity": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 14, 00:20:24.196 "state": "FREE", 00:20:24.196 "validity": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 15, 00:20:24.196 "state": "FREE", 00:20:24.196 "validity": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 16, 00:20:24.196 "state": "FREE", 00:20:24.196 "validity": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 17, 00:20:24.196 "state": "FREE", 00:20:24.196 "validity": 0.0 00:20:24.196 } 00:20:24.196 ], 00:20:24.196 "read-only": true 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "name": "cache_device", 00:20:24.196 "type": "bdev", 00:20:24.196 "chunks": [ 00:20:24.196 { 00:20:24.196 "id": 0, 00:20:24.196 "state": "OPEN", 00:20:24.196 "utilization": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 1, 00:20:24.196 "state": "OPEN", 00:20:24.196 "utilization": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 2, 00:20:24.196 "state": "FREE", 00:20:24.196 "utilization": 0.0 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "id": 3, 00:20:24.196 "state": "FREE", 00:20:24.196 "utilization": 0.0 00:20:24.196 } 00:20:24.196 ], 00:20:24.196 "read-only": true 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "name": "verbose_mode", 00:20:24.196 "value": true, 00:20:24.196 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:20:24.196 }, 00:20:24.196 { 00:20:24.196 "name": "prep_upgrade_on_shutdown", 00:20:24.196 "value": false, 00:20:24.196 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:20:24.196 } 00:20:24.196 ] 00:20:24.196 } 00:20:24.196 04:16:25 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
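The JSON above is the full property set reported by bdev_ftl_get_properties: 18 base-device bands (three CLOSED, carrying data from the earlier write phase, the rest FREE), four NV-cache chunks, and the two writable properties, verbose_mode and prep_upgrade_on_shutdown. The used-chunk count comes from the first jq filter below. Note that the second filter selects .name == "bands" while the property carrying the band list in the JSON above is named base_device, so it matches nothing and yields 0 either way — harmless here, since 0 opened bands is exactly what the test expects. Against a saved copy of the JSON (props.json is a stand-in name):

    jq '[.properties[] | select(.name == "cache_device")
         | .chunks[] | select(.utilization != 0.0)] | length' props.json   # -> used chunks
    jq '[.properties[] | select(.name == "base_device")
         | .bands[] | select(.state == "OPENED")] | length' props.json     # -> opened bands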
00:20:24.196 04:16:25 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:20:24.196 04:16:25 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:20:24.454 04:16:26 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:20:24.454 04:16:26 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:20:24.454 04:16:26 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:20:24.454 04:16:26 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:20:24.454 04:16:26 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:20:24.712 04:16:26 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:20:24.712 04:16:26 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:20:24.712 04:16:26 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:20:24.712 04:16:26 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:20:24.712 04:16:26 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:20:24.712 04:16:26 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:20:24.712 Validate MD5 checksum, iteration 1 00:20:24.712 04:16:26 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:20:24.712 04:16:26 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:20:24.712 04:16:26 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:20:24.712 04:16:26 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:20:24.712 04:16:26 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:20:24.712 04:16:26 -- ftl/common.sh@154 -- # return 0 00:20:24.712 04:16:26 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:20:24.712 [2024-11-26 04:16:26.330178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:24.712 [2024-11-26 04:16:26.330291] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87143 ] 00:20:24.969 [2024-11-26 04:16:26.478345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.969 [2024-11-26 04:16:26.509280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.342  [2024-11-26T04:16:28.368Z] Copying: 696/1024 [MB] (696 MBps) [2024-11-26T04:16:28.934Z] Copying: 1024/1024 [MB] (average 699 MBps) 00:20:27.166 00:20:27.425 04:16:28 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:20:27.425 04:16:28 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:20:29.325 04:16:31 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:20:29.325 04:16:31 -- ftl/upgrade_shutdown.sh@103 -- # sum=ac476747a76b8d5cc05d41c38863383c 00:20:29.325 04:16:31 -- ftl/upgrade_shutdown.sh@105 -- # [[ ac476747a76b8d5cc05d41c38863383c != \a\c\4\7\6\7\4\7\a\7\6\b\8\d\5\c\c\0\5\d\4\1\c\3\8\8\6\3\3\8\3\c ]] 00:20:29.325 04:16:31 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:20:29.325 04:16:31 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:20:29.325 04:16:31 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:20:29.325 Validate MD5 checksum, iteration 2 00:20:29.325 04:16:31 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:20:29.325 04:16:31 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:20:29.325 04:16:31 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:20:29.325 04:16:31 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:20:29.325 04:16:31 -- ftl/common.sh@154 -- # return 0 00:20:29.325 04:16:31 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:20:29.325 [2024-11-26 04:16:31.065264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
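Each validation iteration reads a 1 GiB window (1024 blocks of 1 MiB, queue depth 2) from ftln1 over NVMe/TCP via spdk_dd, hashes the output file, and compares the digest in a [[ ... != ... ]] test (the backslash-escaped string in the trace is just xtrace quoting of the expected value); skip then advances by 1024 MiB so consecutive iterations cover consecutive windows. Condensed, the loop looks roughly like this — sums[] stands in for the checksums recorded earlier in the test, and tcp_dd wraps the spdk_dd invocation shown in the trace:

    skip=0
    for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      ((skip += 1024))
      sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
      [[ $sum == "${sums[i]}" ]] || return 1   # mismatch fails the test
    done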
00:20:29.325 [2024-11-26 04:16:31.065988] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87199 ] 00:20:29.582 [2024-11-26 04:16:31.214409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.582 [2024-11-26 04:16:31.244585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.003  [2024-11-26T04:16:33.340Z] Copying: 664/1024 [MB] (664 MBps) [2024-11-26T04:16:33.907Z] Copying: 1024/1024 [MB] (average 635 MBps) 00:20:32.139 00:20:32.139 04:16:33 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:20:32.139 04:16:33 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:20:34.041 04:16:35 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:20:34.041 04:16:35 -- ftl/upgrade_shutdown.sh@103 -- # sum=4c60ae80fb7a9da2cdd657fbdcb6a07f 00:20:34.041 04:16:35 -- ftl/upgrade_shutdown.sh@105 -- # [[ 4c60ae80fb7a9da2cdd657fbdcb6a07f != \4\c\6\0\a\e\8\0\f\b\7\a\9\d\a\2\c\d\d\6\5\7\f\b\d\c\b\6\a\0\7\f ]] 00:20:34.041 04:16:35 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:20:34.041 04:16:35 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:20:34.041 04:16:35 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:20:34.041 04:16:35 -- ftl/common.sh@137 -- # [[ -n 87112 ]] 00:20:34.041 04:16:35 -- ftl/common.sh@138 -- # kill -9 87112 00:20:34.041 04:16:35 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:20:34.041 04:16:35 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:20:34.041 04:16:35 -- ftl/common.sh@81 -- # local base_bdev= 00:20:34.041 04:16:35 -- ftl/common.sh@82 -- # local cache_bdev= 00:20:34.041 04:16:35 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:20:34.041 04:16:35 -- ftl/common.sh@89 -- # spdk_tgt_pid=87255 00:20:34.041 04:16:35 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:34.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.041 04:16:35 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:20:34.041 04:16:35 -- ftl/common.sh@91 -- # waitforlisten 87255 00:20:34.041 04:16:35 -- common/autotest_common.sh@829 -- # '[' -z 87255 ']' 00:20:34.041 04:16:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.041 04:16:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.041 04:16:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.041 04:16:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.041 04:16:35 -- common/autotest_common.sh@10 -- # set +x 00:20:34.299 [2024-11-26 04:16:35.858488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
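With both windows hashing clean, tcp_target_shutdown_dirty stops the target with SIGKILL, so neither the prep_upgrade_on_shutdown actions nor a clean superblock write can run — which is the point of the test; the "87112 Killed" job-control notice from autotest_common.sh below confirms the kill landed. A fresh spdk_tgt (pid 87255) is then launched with the same cpumask and JSON config, and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. A condensed sketch of what the ftl/common.sh helpers do here, per the trace:

    kill -9 "$spdk_tgt_pid"   # 87112: no graceful stop, FTL left dirty
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # 87255 in this run

The subsequent bring-up therefore has to recover state from media rather than from shared memory ("SHM: clean 0, shm_clean 0" in the startup trace that follows).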
00:20:34.299 [2024-11-26 04:16:35.858605] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87255 ] 00:20:34.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 87112 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:20:34.299 [2024-11-26 04:16:36.003480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.299 [2024-11-26 04:16:36.032003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:34.299 [2024-11-26 04:16:36.032351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.558 [2024-11-26 04:16:36.261800] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:20:34.558 [2024-11-26 04:16:36.261851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:20:34.817 [2024-11-26 04:16:36.394019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.394067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:20:34.817 [2024-11-26 04:16:36.394081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:20:34.817 [2024-11-26 04:16:36.394087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.394126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.394134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:20:34.817 [2024-11-26 04:16:36.394141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:20:34.817 [2024-11-26 04:16:36.394147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.394161] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:20:34.817 [2024-11-26 04:16:36.394331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:20:34.817 [2024-11-26 04:16:36.394346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.394352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:20:34.817 [2024-11-26 04:16:36.394358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:20:34.817 [2024-11-26 04:16:36.394364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.394576] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:20:34.817 [2024-11-26 04:16:36.397739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.397771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:20:34.817 [2024-11-26 04:16:36.397783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.163 ms 00:20:34.817 [2024-11-26 04:16:36.397789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.398571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.398596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:20:34.817 [2024-11-26 04:16:36.398604] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:20:34.817 [2024-11-26 04:16:36.398613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.398831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.398839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:20:34.817 [2024-11-26 04:16:36.398848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:20:34.817 [2024-11-26 04:16:36.398854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.398879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.398891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:20:34.817 [2024-11-26 04:16:36.398897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:20:34.817 [2024-11-26 04:16:36.398903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.398921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.398927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:20:34.817 [2024-11-26 04:16:36.398934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:20:34.817 [2024-11-26 04:16:36.398940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.398956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:20:34.817 [2024-11-26 04:16:36.399668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.399680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:20:34.817 [2024-11-26 04:16:36.399687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.716 ms 00:20:34.817 [2024-11-26 04:16:36.399692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.399712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:20:34.817 [2024-11-26 04:16:36.399720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:20:34.817 [2024-11-26 04:16:36.399732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:20:34.817 [2024-11-26 04:16:36.399739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:20:34.817 [2024-11-26 04:16:36.399758] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:20:34.817 [2024-11-26 04:16:36.399771] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:20:34.817 [2024-11-26 04:16:36.399796] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:20:34.817 [2024-11-26 04:16:36.399808] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:20:34.817 [2024-11-26 04:16:36.399864] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:20:34.817 [2024-11-26 04:16:36.399872] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:20:34.817 [2024-11-26 04:16:36.399879] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes
00:20:34.817 [2024-11-26 04:16:36.399887] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB
00:20:34.817 [2024-11-26 04:16:36.399894] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB
00:20:34.817 [2024-11-26 04:16:36.399900] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873
00:20:34.817 [2024-11-26 04:16:36.399905] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4
00:20:34.817 [2024-11-26 04:16:36.399910] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024
00:20:34.817 [2024-11-26 04:16:36.399916] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4
00:20:34.817 [2024-11-26 04:16:36.399922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.817 [2024-11-26 04:16:36.399930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout
00:20:34.817 [2024-11-26 04:16:36.399939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms
00:20:34.817 [2024-11-26 04:16:36.399946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.817 [2024-11-26 04:16:36.399994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.817 [2024-11-26 04:16:36.400000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout
00:20:34.817 [2024-11-26 04:16:36.400006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms
00:20:34.817 [2024-11-26 04:16:36.400011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.817 [2024-11-26 04:16:36.400067] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:20:34.817 [2024-11-26 04:16:36.400074] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:20:34.817 [2024-11-26 04:16:36.400084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB
00:20:34.817 [2024-11-26 04:16:36.400092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:20:34.817 [2024-11-26 04:16:36.400098] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:20:34.817 [2024-11-26 04:16:36.400103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB
00:20:34.817 [2024-11-26 04:16:36.400108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB
00:20:34.817 [2024-11-26 04:16:36.400114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:20:34.817 [2024-11-26 04:16:36.400119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB
00:20:34.818 [2024-11-26 04:16:36.400126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:20:34.818 [2024-11-26 04:16:36.400131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:20:34.818 [2024-11-26 04:16:36.400136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB
00:20:34.818 [2024-11-26 04:16:36.400141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:20:34.818 [2024-11-26 04:16:36.400146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:20:34.818 [2024-11-26 04:16:36.400151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB
00:20:34.818 [2024-11-26 04:16:36.400155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:20:34.818 [2024-11-26 04:16:36.400161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:20:34.818 [2024-11-26 04:16:36.400166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB
00:20:34.818 [2024-11-26 04:16:36.400170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:20:34.818 [2024-11-26 04:16:36.400175] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc
00:20:34.818 [2024-11-26 04:16:36.400180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB
00:20:34.818 [2024-11-26 04:16:36.400185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB
00:20:34.818 [2024-11-26 04:16:36.400190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:20:34.818 [2024-11-26 04:16:36.400194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB
00:20:34.818 [2024-11-26 04:16:36.400199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB
00:20:34.818 [2024-11-26 04:16:36.400205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:20:34.818 [2024-11-26 04:16:36.400210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB
00:20:34.818 [2024-11-26 04:16:36.400215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB
00:20:34.818 [2024-11-26 04:16:36.400221] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:20:34.818 [2024-11-26 04:16:36.400226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB
00:20:34.818 [2024-11-26 04:16:36.400230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB
00:20:34.818 [2024-11-26 04:16:36.400235] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:20:34.818 [2024-11-26 04:16:36.400239] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB
00:20:34.818 [2024-11-26 04:16:36.400244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB
00:20:34.818 [2024-11-26 04:16:36.400249] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:20:34.818 [2024-11-26 04:16:36.400254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB
00:20:34.818 [2024-11-26 04:16:36.400258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:20:34.818 [2024-11-26 04:16:36.400263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:20:34.818 [2024-11-26 04:16:36.400268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB
00:20:34.818 [2024-11-26 04:16:36.400274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:20:34.818 [2024-11-26 04:16:36.400279] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:20:34.818 [2024-11-26 04:16:36.400286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:20:34.818 [2024-11-26 04:16:36.400292] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB
00:20:34.818 [2024-11-26 04:16:36.400297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:20:34.818 [2024-11-26 04:16:36.400303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:20:34.818 [2024-11-26 04:16:36.400308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB
00:20:34.818 [2024-11-26 04:16:36.400313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB
00:20:34.818 [2024-11-26 04:16:36.400319] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:20:34.818 [2024-11-26 04:16:36.400325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB
00:20:34.818 [2024-11-26 04:16:36.400331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB
00:20:34.818 [2024-11-26 04:16:36.400337] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:20:34.818 [2024-11-26 04:16:36.400348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400359] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:20:34.818 [2024-11-26 04:16:36.400366] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400
00:20:34.818 [2024-11-26 04:16:36.400384] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400
00:20:34.818 [2024-11-26 04:16:36.400392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400
00:20:34.818 [2024-11-26 04:16:36.400398] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400
00:20:34.818 [2024-11-26 04:16:36.400404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400417] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000
00:20:34.818 [2024-11-26 04:16:36.400436] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0
00:20:34.818 [2024-11-26 04:16:36.400441] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:20:34.818 [2024-11-26 04:16:36.400449] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400456] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:34.818 [2024-11-26 04:16:36.400462] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:20:34.818 [2024-11-26 04:16:36.400468] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:20:34.818 [2024-11-26 04:16:36.400474] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:20:34.818 [2024-11-26 04:16:36.400481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.818 [2024-11-26 04:16:36.400488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade
00:20:34.818 [2024-11-26 04:16:36.400498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.446 ms
00:20:34.818 [2024-11-26 04:16:36.400515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.818 [2024-11-26 04:16:36.404673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.818 [2024-11-26 04:16:36.404770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:20:34.818 [2024-11-26 04:16:36.404812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.127 ms
00:20:34.818 [2024-11-26 04:16:36.404830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.818 [2024-11-26 04:16:36.404868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.818 [2024-11-26 04:16:36.405034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses
00:20:34.818 [2024-11-26 04:16:36.405054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms
00:20:34.818 [2024-11-26 04:16:36.405073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.818 [2024-11-26 04:16:36.412593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.818 [2024-11-26 04:16:36.412687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:20:34.819 [2024-11-26 04:16:36.412725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.482 ms
00:20:34.819 [2024-11-26 04:16:36.412741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.412777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.412793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:20:34.819 [2024-11-26 04:16:36.412906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:20:34.819 [2024-11-26 04:16:36.412924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.413001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.413147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:20:34.819 [2024-11-26 04:16:36.413165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms
00:20:34.819 [2024-11-26 04:16:36.413180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.413226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.413291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:20:34.819 [2024-11-26 04:16:36.413309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms
00:20:34.819 [2024-11-26 04:16:36.413326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.417978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.418060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:20:34.819 [2024-11-26 04:16:36.418104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.626 ms
00:20:34.819 [2024-11-26 04:16:36.418121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.418195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.418317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery
00:20:34.819 [2024-11-26 04:16:36.418341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:20:34.819 [2024-11-26 04:16:36.418356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.421479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.421587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state
00:20:34.819 [2024-11-26 04:16:36.421697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.101 ms
00:20:34.819 [2024-11-26 04:16:36.421717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.422553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.422627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing
00:20:34.819 [2024-11-26 04:16:36.422671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.107 ms
00:20:34.819 [2024-11-26 04:16:36.422689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.437211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.437330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints
00:20:34.819 [2024-11-26 04:16:36.437371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.482 ms
00:20:34.819 [2024-11-26 04:16:36.437390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.437473] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8
00:20:34.819 [2024-11-26 04:16:36.437658] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9
00:20:34.819 [2024-11-26 04:16:36.437764] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12
00:20:34.819 [2024-11-26 04:16:36.437862] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0
00:20:34.819 [2024-11-26 04:16:36.437885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.437900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints
00:20:34.819 [2024-11-26 04:16:36.437919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.458 ms
00:20:34.819 [2024-11-26 04:16:36.437957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.438017] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L
00:20:34.819 [2024-11-26 04:16:36.438046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.438104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L
00:20:34.819 [2024-11-26 04:16:36.438123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms
00:20:34.819 [2024-11-26 04:16:36.438141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.440201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.440288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state
00:20:34.819 [2024-11-26 04:16:36.440333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.033 ms
00:20:34.819 [2024-11-26 04:16:36.440353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.440838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.440908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID
00:20:34.819 [2024-11-26 04:16:36.440944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms
00:20:34.819 [2024-11-26 04:16:36.440960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.440991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:34.819 [2024-11-26 04:16:36.441032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map
00:20:34.819 [2024-11-26 04:16:36.441049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:20:34.819 [2024-11-26 04:16:36.441063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:34.819 [2024-11-26 04:16:36.441214] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14
00:20:35.386 [2024-11-26 04:16:36.854551] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14
00:20:35.386 [2024-11-26 04:16:36.854863] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15
00:20:35.644 [2024-11-26 04:16:37.276781] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15
00:20:35.644 [2024-11-26 04:16:37.277009] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:35.644 [2024-11-26 04:16:37.277031] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:20:35.644 [2024-11-26 04:16:37.277041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.277050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L
00:20:35.644 [2024-11-26 04:16:37.277061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 835.931 ms
00:20:35.644 [2024-11-26 04:16:37.277069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.277105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.277117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery
00:20:35.644 [2024-11-26 04:16:37.277125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:20:35.644 [2024-11-26 04:16:37.277132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.284865] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:20:35.644 [2024-11-26 04:16:37.284963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.284973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P
00:20:35.644 [2024-11-26 04:16:37.284982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.815 ms
00:20:35.644 [2024-11-26 04:16:37.284993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.285690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.285833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM
00:20:35.644 [2024-11-26 04:16:37.285848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.633 ms
00:20:35.644 [2024-11-26 04:16:37.285855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.288109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.288129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters
00:20:35.644 [2024-11-26 04:16:37.288139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.232 ms
00:20:35.644 [2024-11-26 04:16:37.288150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.291580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.291610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction
00:20:35.644 [2024-11-26 04:16:37.291619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.408 ms
00:20:35.644 [2024-11-26 04:16:37.291627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.291711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.291725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization
00:20:35.644 [2024-11-26 04:16:37.291734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms
00:20:35.644 [2024-11-26 04:16:37.291741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.292936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.293054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs
00:20:35.644 [2024-11-26 04:16:37.293114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.174 ms
00:20:35.644 [2024-11-26 04:16:37.293168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.293220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.293272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller
00:20:35.644 [2024-11-26 04:16:37.293322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:20:35.644 [2024-11-26 04:16:37.293344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.293436] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:20:35.644 [2024-11-26 04:16:37.293472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.293544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup
00:20:35.644 [2024-11-26 04:16:37.293569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms
00:20:35.644 [2024-11-26 04:16:37.293608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.293686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:35.644 [2024-11-26 04:16:37.293711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:20:35.644 [2024-11-26 04:16:37.293755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms
00:20:35.644 [2024-11-26 04:16:37.293776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:35.644 [2024-11-26 04:16:37.294632] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 900.219 ms, result 0
00:20:35.644 [2024-11-26 04:16:37.309441] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:35.644 [2024-11-26 04:16:37.325481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0
00:20:35.644 [2024-11-26 04:16:37.333576] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:20:35.644 04:16:37 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:35.644 04:16:37 -- common/autotest_common.sh@862 -- # return 0
00:20:35.644 04:16:37 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:20:35.644 04:16:37 -- ftl/common.sh@95 -- # return 0
00:20:35.644 04:16:37 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
00:20:35.644 04:16:37 -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:20:35.644 04:16:37 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:20:35.644 04:16:37 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:20:35.644 04:16:37 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:20:35.644 Validate MD5 checksum, iteration 1
00:20:35.644 04:16:37 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:20:35.644 04:16:37 -- ftl/common.sh@198 -- # tcp_initiator_setup
00:20:35.644 04:16:37 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:20:35.644 04:16:37 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:20:35.644 04:16:37 -- ftl/common.sh@154 -- # return 0
00:20:35.644 04:16:37 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:20:35.902 [2024-11-26 04:16:37.427472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:35.902 [2024-11-26 04:16:37.427696] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87281 ]
00:20:35.902 [2024-11-26 04:16:37.574632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:35.902 [2024-11-26 04:16:37.603891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:37.274  [2024-11-26T04:16:39.607Z] Copying: 691/1024 [MB] (691 MBps) [2024-11-26T04:16:40.172Z] Copying: 1024/1024 [MB] (average 699 MBps)
00:20:38.404
00:20:38.404 04:16:40 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 04:16:40 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:20:40.306 04:16:41 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:20:40.306 Validate MD5 checksum, iteration 2
00:20:40.306 04:16:41 -- ftl/upgrade_shutdown.sh@103 -- # sum=ac476747a76b8d5cc05d41c38863383c
00:20:40.306 04:16:41 -- ftl/upgrade_shutdown.sh@105 -- # [[ ac476747a76b8d5cc05d41c38863383c != \a\c\4\7\6\7\4\7\a\7\6\b\8\d\5\c\c\0\5\d\4\1\c\3\8\8\6\3\3\8\3\c ]]
00:20:40.306 04:16:41 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:20:40.306 04:16:41 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:20:40.306 04:16:41 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:20:40.306 04:16:41 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:20:40.306 04:16:41 -- ftl/common.sh@198 -- # tcp_initiator_setup
00:20:40.306 04:16:41 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:20:40.306 04:16:41 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:20:40.306 04:16:41 -- ftl/common.sh@154 -- # return 0
00:20:40.306 04:16:41 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:20:40.306 [2024-11-26 04:16:41.998436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:40.306 [2024-11-26 04:16:41.998703] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87338 ]
00:20:40.565 [2024-11-26 04:16:42.146403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:40.565 [2024-11-26 04:16:42.175851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:41.940  [2024-11-26T04:16:43.967Z] Copying: 705/1024 [MB] (705 MBps) [2024-11-26T04:16:45.869Z] Copying: 1024/1024 [MB] (average 703 MBps)
00:20:44.101
00:20:44.101 04:16:45 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 04:16:45 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:20:46.003 04:16:47 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:20:46.003 04:16:47 -- ftl/upgrade_shutdown.sh@103 -- # sum=4c60ae80fb7a9da2cdd657fbdcb6a07f
00:20:46.003 04:16:47 -- ftl/upgrade_shutdown.sh@105 -- # [[ 4c60ae80fb7a9da2cdd657fbdcb6a07f != \4\c\6\0\a\e\8\0\f\b\7\a\9\d\a\2\c\d\d\6\5\7\f\b\d\c\b\6\a\0\7\f ]]
00:20:46.004 04:16:47 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:20:46.004 04:16:47 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:20:46.004 04:16:47 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:20:46.004 04:16:47 -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:20:46.004 04:16:47 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:20:46.004 04:16:47 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:20:46.004 04:16:47 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:20:46.004 04:16:47 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:20:46.004 04:16:47 -- ftl/common.sh@193 -- # tcp_target_cleanup
00:20:46.004 04:16:47 -- ftl/common.sh@144 -- # tcp_target_shutdown
00:20:46.004 04:16:47 -- ftl/common.sh@130 -- # [[ -n 87255 ]]
00:20:46.004 04:16:47 -- ftl/common.sh@131 -- # killprocess 87255
00:20:46.004 04:16:47 -- common/autotest_common.sh@936 -- # '[' -z 87255 ']'
00:20:46.004 04:16:47 -- common/autotest_common.sh@940 -- # kill -0 87255
00:20:46.004 04:16:47 -- common/autotest_common.sh@941 -- # uname
00:20:46.004 04:16:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:46.004 04:16:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87255
00:20:46.004 killing process with pid 87255
00:20:46.004 04:16:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:46.004 04:16:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:46.004 04:16:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87255'
00:20:46.004 04:16:47 -- common/autotest_common.sh@955 -- # kill 87255
00:20:46.004 04:16:47 -- common/autotest_common.sh@960 -- # wait 87255
00:20:46.004 [2024-11-26 04:16:47.692889] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0
00:20:46.004 [2024-11-26 04:16:47.696869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.696904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:20:46.004 [2024-11-26 04:16:47.696914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:20:46.004 [2024-11-26 04:16:47.696924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.696941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:20:46.004 [2024-11-26 04:16:47.697318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.697337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:20:46.004 [2024-11-26 04:16:47.697345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.366 ms
00:20:46.004 [2024-11-26 04:16:47.697351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.697551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.697559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:20:46.004 [2024-11-26 04:16:47.697565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms
00:20:46.004 [2024-11-26 04:16:47.697571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.698511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.698534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:20:46.004 [2024-11-26 04:16:47.698541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.928 ms
00:20:46.004 [2024-11-26 04:16:47.698547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.699404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.699422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps
00:20:46.004 [2024-11-26 04:16:47.699429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.830 ms
00:20:46.004 [2024-11-26 04:16:47.699436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.700725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.700756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:20:46.004 [2024-11-26 04:16:47.700763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.260 ms
00:20:46.004 [2024-11-26 04:16:47.700769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.701892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.701920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:20:46.004 [2024-11-26 04:16:47.701927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms
00:20:46.004 [2024-11-26 04:16:47.701933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.701995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.702006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:20:46.004 [2024-11-26 04:16:47.702012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms
00:20:46.004 [2024-11-26 04:16:47.702017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.703220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.703247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata
00:20:46.004 [2024-11-26 04:16:47.703254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.190 ms
00:20:46.004 [2024-11-26 04:16:47.703259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.704433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.704551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata
00:20:46.004 [2024-11-26 04:16:47.704563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.148 ms
00:20:46.004 [2024-11-26 04:16:47.704569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.705601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.705621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:20:46.004 [2024-11-26 04:16:47.705627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.006 ms
00:20:46.004 [2024-11-26 04:16:47.705632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.706631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.706657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:20:46.004 [2024-11-26 04:16:47.706664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms
00:20:46.004 [2024-11-26 04:16:47.706670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.706695] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:20:46.004 [2024-11-26 04:16:47.706706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:20:46.004 [2024-11-26 04:16:47.706714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:20:46.004 [2024-11-26 04:16:47.706727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:20:46.004 [2024-11-26 04:16:47.706734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:20:46.004 [2024-11-26 04:16:47.706821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:20:46.004 [2024-11-26 04:16:47.706827] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c0590d80-07f1-4bef-bc8c-dfb33ce726e5
00:20:46.004 [2024-11-26 04:16:47.706833] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:20:46.004 [2024-11-26 04:16:47.706841] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:20:46.004 [2024-11-26 04:16:47.706846] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:20:46.004 [2024-11-26 04:16:47.706852] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:20:46.004 [2024-11-26 04:16:47.706861] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:20:46.004 [2024-11-26 04:16:47.706867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:20:46.004 [2024-11-26 04:16:47.706873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:20:46.004 [2024-11-26 04:16:47.706877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:20:46.004 [2024-11-26 04:16:47.706882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:20:46.004 [2024-11-26 04:16:47.706888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.706894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:20:46.004 [2024-11-26 04:16:47.706900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms
00:20:46.004 [2024-11-26 04:16:47.706906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.708109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.708204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:20:46.004 [2024-11-26 04:16:47.708221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.190 ms
00:20:46.004 [2024-11-26 04:16:47.708227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.004 [2024-11-26 04:16:47.708273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action
00:20:46.004 [2024-11-26 04:16:47.708279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:20:46.005 [2024-11-26 04:16:47.708286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms
00:20:46.005 [2024-11-26 04:16:47.708294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.713010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.713096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:20:46.005 [2024-11-26 04:16:47.713145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.713162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.713196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.713295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:20:46.005 [2024-11-26 04:16:47.713315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.713329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.713394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.713460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:20:46.005 [2024-11-26 04:16:47.713482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.713511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.713536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.713554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:20:46.005 [2024-11-26 04:16:47.713569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.713616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.721864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.722143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:20:46.005 [2024-11-26 04:16:47.722293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.722312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.725454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.725565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:20:46.005 [2024-11-26 04:16:47.725609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.725628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.725756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.725780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:20:46.005 [2024-11-26 04:16:47.725797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.725816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.725862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.725950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:20:46.005 [2024-11-26 04:16:47.725970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.725985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.726051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.726166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:20:46.005 [2024-11-26 04:16:47.726185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.726200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.726248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.726302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:20:46.005 [2024-11-26 04:16:47.726321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.726336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.726377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.726397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:20:46.005 [2024-11-26 04:16:47.726440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.726495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.726572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:20:46.005 [2024-11-26 04:16:47.726666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:20:46.005 [2024-11-26 04:16:47.726685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:20:46.005 [2024-11-26 04:16:47.726700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:20:46.005 [2024-11-26 04:16:47.726820] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 29.920 ms, result 0
00:20:46.264 04:16:47 -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:20:46.264 04:16:47 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:46.264 04:16:47 -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:20:46.264 04:16:47 -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:20:46.264 04:16:47 -- ftl/common.sh@181 -- # [[ -n '' ]]
00:20:46.264 04:16:47 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:46.264 04:16:47 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:20:46.264 04:16:47 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:20:46.264 Remove shared memory files
00:20:46.264 04:16:47 -- ftl/common.sh@205 -- # rm -f rm -f
00:20:46.264 04:16:47 -- ftl/common.sh@206 -- # rm -f rm -f
00:20:46.264 04:16:47 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid87112
00:20:46.264 04:16:47 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:20:46.264 04:16:47 -- ftl/common.sh@209 -- # rm -f rm -f
00:20:46.264 ************************************
00:20:46.264 END TEST ftl_upgrade_shutdown
00:20:46.264 ************************************
00:20:46.264
00:20:46.264 real 1m2.528s
00:20:46.264 user 1m27.227s
00:20:46.264 sys 0m16.874s
00:20:46.264 04:16:47 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:46.264 04:16:47 -- common/autotest_common.sh@10 -- # set +x
00:20:46.264 04:16:47 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']'
00:20:46.264 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected
00:20:46.264 04:16:47 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']'
00:20:46.264 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected
00:20:46.264 04:16:47 -- ftl/ftl.sh@1 -- # at_ftl_exit
00:20:46.264 04:16:47 -- ftl/ftl.sh@14 -- # killprocess 81868
00:20:46.264 Process with pid 81868 is not found
00:20:46.264 04:16:47 -- common/autotest_common.sh@936 -- # '[' -z 81868 ']'
00:20:46.264 04:16:47 -- common/autotest_common.sh@940 -- # kill -0 81868
00:20:46.264 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (81868) - No such process
00:20:46.264 04:16:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 81868 is not found'
00:20:46.264 04:16:47 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]]
00:20:46.264 04:16:47 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87436
00:20:46.264 04:16:47 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:46.264 04:16:47 -- ftl/ftl.sh@20 -- # waitforlisten 87436
00:20:46.264 04:16:47 -- common/autotest_common.sh@829 -- # '[' -z 87436 ']'
00:20:46.264 04:16:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:46.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:46.264 04:16:47 -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:46.264 04:16:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:46.264 04:16:47 -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:46.264 04:16:47 -- common/autotest_common.sh@10 -- # set +x
00:20:46.554 [2024-11-26 04:16:47.983154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:46.554 [2024-11-26 04:16:47.983264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87436 ]
00:20:46.554 [2024-11-26 04:16:48.124318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:46.554 [2024-11-26 04:16:48.152364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:46.554 [2024-11-26 04:16:48.152542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:47.158 04:16:48 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:47.158 04:16:48 -- common/autotest_common.sh@862 -- # return 0
00:20:47.158 04:16:48 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:20:47.416 nvme0n1
00:20:47.416 04:16:48 -- ftl/ftl.sh@22 -- # clear_lvols
00:20:47.416 04:16:49 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:20:47.416 04:16:49 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:20:47.673 04:16:49 -- ftl/common.sh@28 -- # stores=d13057d6-7ca3-41be-be83-ff5bbd5f0f0e
00:20:47.673 04:16:49 -- ftl/common.sh@29 -- # for lvs in $stores
00:20:47.673 04:16:49 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d13057d6-7ca3-41be-be83-ff5bbd5f0f0e
00:20:47.673 04:16:49 -- ftl/ftl.sh@23 -- # killprocess 87436
00:20:47.673 04:16:49 -- common/autotest_common.sh@936 -- # '[' -z 87436 ']'
00:20:47.673 04:16:49 -- common/autotest_common.sh@940 -- # kill -0 87436
00:20:47.673 04:16:49 -- common/autotest_common.sh@941 -- # uname
00:20:47.673 04:16:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:47.673 04:16:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87436
00:20:47.673 04:16:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:47.673 killing process with pid 87436
00:20:47.673 04:16:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:47.673 04:16:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87436'
00:20:47.673 04:16:49 -- common/autotest_common.sh@955 -- # kill 87436
00:20:47.673 04:16:49 -- common/autotest_common.sh@960 -- # wait 87436
00:20:47.930 04:16:49 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:20:48.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:48.188 Waiting for block devices as requested
00:20:48.188 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:20:48.446 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:20:48.446 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:20:48.446 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:20:53.720 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:20:53.720 04:16:55 -- ftl/ftl.sh@28 -- # remove_shm
00:20:53.720 Remove shared memory files
00:20:53.720 04:16:55 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:20:53.720 04:16:55 -- ftl/common.sh@205 -- # rm -f rm -f
00:20:53.720 04:16:55 -- ftl/common.sh@206 -- # rm -f rm -f
00:20:53.720 04:16:55 -- ftl/common.sh@207 -- # rm -f rm -f
00:20:53.720 04:16:55 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:20:53.720 04:16:55 -- ftl/common.sh@209 -- # rm -f rm -f
00:20:53.720
00:20:53.720 real 7m19.917s
00:20:53.720 user 9m10.672s
00:20:53.720 sys 1m0.091s
00:20:53.720 04:16:55 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:53.720 04:16:55 -- common/autotest_common.sh@10 -- # set +x
00:20:53.720 ************************************
00:20:53.720 END TEST ftl
00:20:53.720 ************************************
00:20:53.720 04:16:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:53.720 04:16:55 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:20:53.720 04:16:55 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:53.720 04:16:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:53.720 04:16:55 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:20:53.720 04:16:55 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:20:53.720 04:16:55 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:20:53.720 04:16:55 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:20:53.720 04:16:55 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:20:53.720 04:16:55 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:20:53.720 04:16:55 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:53.720 04:16:55 -- common/autotest_common.sh@10 -- # set +x
00:20:53.720 04:16:55 -- spdk/autotest.sh@373 -- # autotest_cleanup
00:20:53.720 04:16:55 -- common/autotest_common.sh@1381 -- # local autotest_es=0
00:20:53.720 04:16:55 -- common/autotest_common.sh@1382 -- # xtrace_disable
00:20:53.720 04:16:55 -- common/autotest_common.sh@10 -- # set +x
00:20:54.658 INFO: APP EXITING
00:20:54.658 INFO: killing all VMs
00:20:54.658 INFO: killing vhost app
00:20:54.658 INFO: EXIT DONE
00:20:55.229 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:55.491 0000:00:09.0 (1b36 0010): Already using the nvme driver
00:20:55.491 0000:00:08.0 (1b36 0010): Already using the nvme driver
00:20:55.491 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:20:55.491 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:20:56.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:56.062 Cleaning
00:20:56.062 Removing: /var/run/dpdk/spdk0/config
00:20:56.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:56.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:56.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:56.062 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:56.062 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:56.062 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:56.062 Removing: /var/run/dpdk/spdk0
00:20:56.062 Removing: /var/run/dpdk/spdk_pid68389
00:20:56.062 Removing: /var/run/dpdk/spdk_pid68557
00:20:56.062 Removing: /var/run/dpdk/spdk_pid68851
00:20:56.062 Removing: /var/run/dpdk/spdk_pid68929
00:20:56.062 Removing: /var/run/dpdk/spdk_pid69002
00:20:56.062 Removing: /var/run/dpdk/spdk_pid69101
00:20:56.062 Removing: /var/run/dpdk/spdk_pid69175
00:20:56.062 Removing: /var/run/dpdk/spdk_pid69214
00:20:56.062 Removing: /var/run/dpdk/spdk_pid69251
00:20:56.062 Removing: /var/run/dpdk/spdk_pid69315
00:20:56.062 Removing: /var/run/dpdk/spdk_pid69410
00:20:56.324 Removing: /var/run/dpdk/spdk_pid69823
00:20:56.324 Removing: /var/run/dpdk/spdk_pid69865
00:20:56.324 Removing: /var/run/dpdk/spdk_pid69917
00:20:56.324 Removing: /var/run/dpdk/spdk_pid69928
00:20:56.324 Removing: /var/run/dpdk/spdk_pid69986
00:20:56.324 Removing: /var/run/dpdk/spdk_pid69999
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70054
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70070
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70112
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70130
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70172
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70190
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70316
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70347
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70430
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70483
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70503
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70570
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70585
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70621
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70640
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70671
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70692
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70722
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70742
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70778
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70793
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70828
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70849
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70879
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70899
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70935
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70950
00:20:56.324 Removing: /var/run/dpdk/spdk_pid70991
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71006
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71038
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71064
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71094
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71109
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71150
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71165
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71195
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71216
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71251
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71266
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71305
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71325
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71355
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71376
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71411
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71429
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71468
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71491
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71524
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71545
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71580
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71601
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71637
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71715
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71811
00:20:56.324 Removing: /var/run/dpdk/spdk_pid71970
00:20:56.324 Removing: /var/run/dpdk/spdk_pid72043
00:20:56.324 Removing: /var/run/dpdk/spdk_pid72068
00:20:56.324 Removing: /var/run/dpdk/spdk_pid72488
00:20:56.324 Removing: /var/run/dpdk/spdk_pid72694
00:20:56.324 Removing: /var/run/dpdk/spdk_pid72798
00:20:56.324 Removing: /var/run/dpdk/spdk_pid72834
00:20:56.324 Removing: /var/run/dpdk/spdk_pid72865
00:20:56.324 Removing: /var/run/dpdk/spdk_pid72937
00:20:56.324 Removing: /var/run/dpdk/spdk_pid73572
00:20:56.324 Removing: /var/run/dpdk/spdk_pid73602
00:20:56.324 Removing: /var/run/dpdk/spdk_pid74057
00:20:56.324 Removing: /var/run/dpdk/spdk_pid74155
00:20:56.324 Removing: /var/run/dpdk/spdk_pid74264
00:20:56.324 Removing: /var/run/dpdk/spdk_pid74295
00:20:56.324 Removing: /var/run/dpdk/spdk_pid74326
00:20:56.324 Removing: /var/run/dpdk/spdk_pid74346
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76234
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76355
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76359
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76376
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76428
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76432
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76444
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76505
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76515
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76527
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76571
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76575
00:20:56.324 Removing: /var/run/dpdk/spdk_pid76587
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78015
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78100
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78217
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78283
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78345
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78399
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78479
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78549
00:20:56.324 Removing: /var/run/dpdk/spdk_pid78679
00:20:56.324 Removing: /var/run/dpdk/spdk_pid79048
00:20:56.324 Removing: /var/run/dpdk/spdk_pid79074
00:20:56.324 Removing: /var/run/dpdk/spdk_pid79505
00:20:56.324 Removing: /var/run/dpdk/spdk_pid79682
00:20:56.324 Removing: /var/run/dpdk/spdk_pid79764
00:20:56.324 Removing: /var/run/dpdk/spdk_pid79863
00:20:56.324 Removing: /var/run/dpdk/spdk_pid79902
00:20:56.324 Removing: /var/run/dpdk/spdk_pid79928
00:20:56.324 Removing: /var/run/dpdk/spdk_pid80482
00:20:56.324 Removing: /var/run/dpdk/spdk_pid80509
00:20:56.324 Removing: /var/run/dpdk/spdk_pid80565
00:20:56.324 Removing: /var/run/dpdk/spdk_pid80928
00:20:56.324 Removing: /var/run/dpdk/spdk_pid81072
00:20:56.324 Removing: /var/run/dpdk/spdk_pid81868
00:20:56.586 Removing: /var/run/dpdk/spdk_pid81988
00:20:56.586 Removing: /var/run/dpdk/spdk_pid82154
00:20:56.586 Removing: /var/run/dpdk/spdk_pid82230
00:20:56.586 Removing: /var/run/dpdk/spdk_pid82499
00:20:56.586 Removing: /var/run/dpdk/spdk_pid82720
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83053
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83247
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83344
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83380
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83457
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83471
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83507
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83654
00:20:56.586 Removing: /var/run/dpdk/spdk_pid83898
00:20:56.586 Removing: /var/run/dpdk/spdk_pid84148
00:20:56.586 Removing: /var/run/dpdk/spdk_pid84403
00:20:56.586 Removing: /var/run/dpdk/spdk_pid84672
00:20:56.586 Removing: /var/run/dpdk/spdk_pid85002
00:20:56.586 Removing: /var/run/dpdk/spdk_pid85127
00:20:56.586 Removing: /var/run/dpdk/spdk_pid85200
00:20:56.586 Removing: /var/run/dpdk/spdk_pid85548
00:20:56.586 Removing: /var/run/dpdk/spdk_pid85595
00:20:56.586 Removing: /var/run/dpdk/spdk_pid85885
00:20:56.586 Removing: /var/run/dpdk/spdk_pid86141
00:20:56.586 Removing: /var/run/dpdk/spdk_pid86616
00:20:56.586 Removing: /var/run/dpdk/spdk_pid86732
00:20:56.586 Removing: /var/run/dpdk/spdk_pid86764
00:20:56.586 Removing: /var/run/dpdk/spdk_pid86822
00:20:56.586 Removing: /var/run/dpdk/spdk_pid86871
00:20:56.586 Removing: /var/run/dpdk/spdk_pid86918
00:20:56.586 Removing: /var/run/dpdk/spdk_pid87112
00:20:56.586 Removing: /var/run/dpdk/spdk_pid87143
00:20:56.586 Removing: /var/run/dpdk/spdk_pid87199
00:20:56.586 Removing: /var/run/dpdk/spdk_pid87255
00:20:56.586 Removing: /var/run/dpdk/spdk_pid87281
00:20:56.586 Removing: /var/run/dpdk/spdk_pid87338
00:20:56.586 Removing: /var/run/dpdk/spdk_pid87436
00:20:56.586 Clean
00:20:56.586 killing process with pid 60585
00:20:56.586 killing process with pid 60587
00:20:56.586 04:16:58 -- common/autotest_common.sh@1446 -- # return 0
00:20:56.586 04:16:58 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:20:56.586 04:16:58 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:56.586 04:16:58 -- common/autotest_common.sh@10 -- # set +x
00:20:56.848 04:16:58 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:20:56.848 04:16:58 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:56.848 04:16:58 -- common/autotest_common.sh@10 -- # set +x
00:20:56.848 04:16:58 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:56.848 04:16:58 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:56.848 04:16:58 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:56.848 04:16:58 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:20:56.848 04:16:58 -- spdk/autotest.sh@383 -- # hostname
00:20:56.848 04:16:58 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:56.848 geninfo: WARNING: invalid characters removed from testname!
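Note: the coverage post-processing traced next (spdk/autotest.sh@383-393) follows the standard lcov pattern: capture counters from the build tree, merge the baseline and test tracefiles, then strip paths that should not count toward coverage. The geninfo warning above is emitted because the -t testname contains characters outside geninfo's allowed set, which it silently drops. A condensed sketch of the same sequence, with $repo as a placeholder path and the log's --rc options omitted for brevity:

    repo=/path/to/spdk                                                       # placeholder
    lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o cov_test.info    # capture test counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info              # merge baseline + test
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                   # drop vendored DPDK
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info                     # drop system sources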
00:21:18.958 04:17:20 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:22.251 04:17:23 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:24.161 04:17:25 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:26.068 04:17:27 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:27.985 04:17:29 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:29.895 04:17:31 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:32.442 04:17:33 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:32.442 04:17:33 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:21:32.442 04:17:33 -- common/autotest_common.sh@1690 -- $ lcov --version
00:21:32.443 04:17:33 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:21:32.443 04:17:33 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:21:32.443 04:17:33 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:21:32.443 04:17:33 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:21:32.443 04:17:33 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:21:32.443 04:17:33 -- scripts/common.sh@335 -- $ IFS=.-:
00:21:32.443 04:17:33 -- scripts/common.sh@335 -- $ read -ra ver1
00:21:32.443 04:17:33 -- scripts/common.sh@336 -- $ IFS=.-:
00:21:32.443 04:17:33 -- scripts/common.sh@336 -- $ read -ra ver2
00:21:32.443 04:17:33 -- scripts/common.sh@337 -- $ local 'op=<'
00:21:32.443 04:17:33 -- scripts/common.sh@339 -- $ ver1_l=2
00:21:32.443 04:17:33 -- scripts/common.sh@340 -- $ ver2_l=1
00:21:32.443 04:17:33 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:21:32.443 04:17:33 -- scripts/common.sh@343 -- $ case "$op" in
00:21:32.443 04:17:33 -- scripts/common.sh@344 -- $ : 1
00:21:32.443 04:17:33 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:21:32.443 04:17:33 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:32.443 04:17:33 -- scripts/common.sh@364 -- $ decimal 1
00:21:32.443 04:17:33 -- scripts/common.sh@352 -- $ local d=1
00:21:32.443 04:17:33 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:21:32.443 04:17:33 -- scripts/common.sh@354 -- $ echo 1
00:21:32.443 04:17:33 -- scripts/common.sh@364 -- $ ver1[v]=1
00:21:32.443 04:17:33 -- scripts/common.sh@365 -- $ decimal 2
00:21:32.443 04:17:33 -- scripts/common.sh@352 -- $ local d=2
00:21:32.443 04:17:33 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:21:32.443 04:17:33 -- scripts/common.sh@354 -- $ echo 2
00:21:32.443 04:17:33 -- scripts/common.sh@365 -- $ ver2[v]=2
00:21:32.443 04:17:33 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:21:32.443 04:17:33 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:21:32.443 04:17:33 -- scripts/common.sh@367 -- $ return 0
00:21:32.443 04:17:33 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:32.443 04:17:33 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:21:32.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.443 --rc genhtml_branch_coverage=1
00:21:32.443 --rc genhtml_function_coverage=1
00:21:32.443 --rc genhtml_legend=1
00:21:32.443 --rc geninfo_all_blocks=1
00:21:32.443 --rc geninfo_unexecuted_blocks=1
00:21:32.443
00:21:32.443 '
00:21:32.443 04:17:33 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:21:32.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.443 --rc genhtml_branch_coverage=1
00:21:32.443 --rc genhtml_function_coverage=1
00:21:32.443 --rc genhtml_legend=1
00:21:32.443 --rc geninfo_all_blocks=1
00:21:32.443 --rc geninfo_unexecuted_blocks=1
00:21:32.443
00:21:32.443 '
00:21:32.443 04:17:33 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:21:32.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.443 --rc genhtml_branch_coverage=1
00:21:32.443 --rc genhtml_function_coverage=1
00:21:32.443 --rc genhtml_legend=1
00:21:32.443 --rc geninfo_all_blocks=1
00:21:32.443 --rc geninfo_unexecuted_blocks=1
00:21:32.443
00:21:32.443 '
00:21:32.443 04:17:33 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:21:32.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.443 --rc genhtml_branch_coverage=1
00:21:32.443 --rc genhtml_function_coverage=1
00:21:32.443 --rc genhtml_legend=1
00:21:32.443 --rc geninfo_all_blocks=1
00:21:32.443 --rc geninfo_unexecuted_blocks=1
00:21:32.443
00:21:32.443 '
00:21:32.443 04:17:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:32.443 04:17:33 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:21:32.443 04:17:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:32.443 04:17:33 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
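Note: the scripts/common.sh trace above (lt 1.15 2 -> cmp_versions 1.15 '<' 2) splits each version string on '.', '-' and ':' and compares the fields numerically from left to right, which is how lcov 1.15 is correctly ordered before 2. A standalone sketch of the same comparison; the helper name ver_lt is illustrative, not the actual scripts/common.sh API:

    # ver_lt A B: succeed when dotted version A sorts strictly before B.
    ver_lt() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo 'lcov older than 2.x'  # matches the lt 1.15 2 result above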
00:21:32.443 04:17:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:32.443 04:17:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:32.443 04:17:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:32.443 04:17:33 -- paths/export.sh@5 -- $ export PATH
00:21:32.443 04:17:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:32.443 04:17:33 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:21:32.443 04:17:33 -- common/autobuild_common.sh@440 -- $ date +%s
00:21:32.443 04:17:33 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732594653.XXXXXX
00:21:32.443 04:17:33 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732594653.h0b9hA
00:21:32.443 04:17:33 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:21:32.443 04:17:33 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
00:21:32.443 04:17:33 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:21:32.443 04:17:33 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:21:32.443 04:17:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:21:32.443 04:17:33 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:21:32.443 04:17:33 -- common/autobuild_common.sh@456 -- $ get_config_params
00:21:32.443 04:17:33 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:21:32.443 04:17:33 -- common/autotest_common.sh@10 -- $ set +x
00:21:32.443 04:17:33 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:21:32.443 04:17:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:21:32.443 04:17:33 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
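Note: the paths/export.sh trace above prepends each toolchain directory to PATH unconditionally, which is why /opt/go, /opt/protoc and /opt/golangci each appear twice in the echoed result. A duplicate-safe variant is sketched below as an alternative idiom, not as the script's actual behavior:

    # path_prepend DIR: prepend DIR to PATH unless it is already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH: do nothing
            *) PATH="$1:$PATH" ;;
        esac
        export PATH
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin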
00:21:32.443 04:17:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:21:32.443 04:17:33 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:21:32.443 04:17:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:21:32.443 04:17:33 -- spdk/autopackage.sh@19 -- $ timing_finish
00:21:32.443 04:17:33 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:32.443 04:17:33 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:21:32.443 04:17:33 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:32.443 04:17:33 -- spdk/autopackage.sh@20 -- $ exit 0
00:21:32.443 + [[ -n 5712 ]]
00:21:32.443 + sudo kill 5712
00:21:32.454 [Pipeline] }
00:21:32.473 [Pipeline] // timeout
00:21:32.480 [Pipeline] }
00:21:32.495 [Pipeline] // stage
00:21:32.503 [Pipeline] }
00:21:32.520 [Pipeline] // catchError
00:21:32.533 [Pipeline] stage
00:21:32.536 [Pipeline] { (Stop VM)
00:21:32.551 [Pipeline] sh
00:21:32.834 + vagrant halt
00:21:35.382 ==> default: Halting domain...
00:21:40.745 [Pipeline] sh
00:21:41.028 + vagrant destroy -f
00:21:43.559 ==> default: Removing domain...
00:21:44.139 [Pipeline] sh
00:21:44.418 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:21:44.426 [Pipeline] }
00:21:44.441 [Pipeline] // stage
00:21:44.447 [Pipeline] }
00:21:44.461 [Pipeline] // dir
00:21:44.467 [Pipeline] }
00:21:44.481 [Pipeline] // wrap
00:21:44.488 [Pipeline] }
00:21:44.501 [Pipeline] // catchError
00:21:44.511 [Pipeline] stage
00:21:44.513 [Pipeline] { (Epilogue)
00:21:44.524 [Pipeline] sh
00:21:44.801 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:49.000 [Pipeline] catchError
00:21:49.002 [Pipeline] {
00:21:49.015 [Pipeline] sh
00:21:49.294 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:49.294 Artifacts sizes are good
00:21:49.302 [Pipeline] }
00:21:49.316 [Pipeline] // catchError
00:21:49.328 [Pipeline] archiveArtifacts
00:21:49.335 Archiving artifacts
00:21:49.435 [Pipeline] cleanWs
00:21:49.447 [WS-CLEANUP] Deleting project workspace...
00:21:49.447 [WS-CLEANUP] Deferred wipeout is used...
00:21:49.453 [WS-CLEANUP] done
00:21:49.490 [Pipeline] }
00:21:49.506 [Pipeline] // stage
00:21:49.511 [Pipeline] }
00:21:49.525 [Pipeline] // node
00:21:49.530 [Pipeline] End of Pipeline
00:21:49.571 Finished: SUCCESS